Impact of inquiry interventions on students in e-learning and classroom environments using affective computing framework

  • T. S. Ashwin
  • Ram Mohana Reddy Guddeti

Abstract

Effective teaching strategies improve students’ learning rate within the available academic learning time. Inquiry-based instruction is one such strategy widely used in classrooms, but it is rarely adapted to other learning environments such as intelligent tutoring systems, including auto-tutors. In this paper, we propose an automatic inquiry-based instruction teaching strategy, i.e., inquiry intervention driven by students’ affective states. The proposed model contains two modules: the first is a framework for unobtrusive, multi-modal prediction of students’ affective states (teacher-centric attentive and in-attentive states) from facial expressions, hand gestures and body postures; the second is an automated inquiry-based instruction strategy that compares learning outcomes with and without inquiry intervention using affective state transitions, both for individual students and for groups of students. The proposed system is tested on four learning environments, namely e-learning, flipped classroom, classroom and webinar. Unobtrusive recognition of students’ affective states is performed using deep learning architectures. With student-independent tenfold cross-validation, we obtained an affective state classification accuracy of 77% and an object localization accuracy of 81% using students’ faces, hand gestures and body postures. The overall experimental results demonstrate a positive correlation (\(r=0.74\)) between students’ affective states and their performance. The proposed inquiry intervention improved students’ performance, with decreases of 65%, 43%, 43% and 53% in overall in-attentive affective state instances in the e-learning, flipped classroom, classroom and webinar environments, respectively.
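To make the reported relationship between affective states and performance concrete, the snippet below is a minimal illustrative sketch (not the authors' implementation) of how a per-student attentive-state proportion could be correlated with a performance score using Pearson's r; the variable names and sample values are hypothetical.

```python
# Illustrative sketch (hypothetical data, not the authors' pipeline):
# correlating per-student attentive-state proportions with test scores.
from scipy.stats import pearsonr

# Hypothetical per-student data: fraction of observed frames labelled
# "attentive" by an affect classifier, and the post-test score (0-100).
attentive_fraction = [0.82, 0.65, 0.91, 0.45, 0.73, 0.58, 0.88, 0.69]
post_test_score    = [78,   61,   90,   42,   70,   55,   85,   66]

# Pearson correlation coefficient and two-tailed p-value.
r, p_value = pearsonr(attentive_fraction, post_test_score)
print(f"Pearson r = {r:.2f} (p = {p_value:.3f})")
```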

Keywords

Affective computing · Facial emotion recognition · Convolutional neural network · Affective states · Student engagement · Inquiry-based instruction · Automatic intervention

Notes

Acknowledgements

The authors wish to thank the undergraduate, postgraduate and doctoral research students of the Department of Information Technology, National Institute of Technology Karnataka Surathkal, Mangalore, India, for their voluntary help in creating the affective database for both the e-learning and classroom environments.

Compliance with ethical standards

Ethical approval

The experimental procedure, participants and course contents used in the experiment were approved by the Institutional Ethics Committee (IEC) of NITK Surathkal, Mangalore, India. Participants were informed that they had the right to quit the experiment at any time. Video recordings of the subjects were included in the experiment only after they gave written consent for the use of their videos in this research. All subjects also agreed to the use of their facial expressions, hand gestures and body postures in all processes involved in completing the project.

Copyright information

© Springer Nature B.V. 2020

Authors and Affiliations

  1. NITK Surathkal, Mangalore, India
