Using RGBD cameras for classifying learning and teacher interaction through postural attitude

  • Original Paper
  • Published in: International Journal on Interactive Design and Manufacturing (IJIDeM)

Abstract

Globally, improving and evaluating students' academic performance has been a priority; however, the quality of the learning process within the classroom is still assessed with traditional methods such as grades or perception surveys. In addition, continuously measuring the performance of the student, the teacher, and their interaction in the classroom is difficult, because several internal and external factors can affect pedagogical practice in classroom or e-learning environments, and their effects are not yet completely understood. Advances in motion tracking with low-cost devices such as RGBD cameras now allow real-time monitoring of a person's posture inside closed spaces such as a classroom. Some research projects have associated posture with affective and cognitive states, but, as far as we know, none has proposed an approach for classifying learning and teacher interaction using posture. This work develops and tests an approach that uses a set of performance metrics of the student and the teacher to classify whether learning and teacher-student interaction were successful. An experimental design with an experimental group and a control group was used to evaluate whether poor and good teacher-student interaction can be distinguished. The results showed that the proposed approach can make this distinction, and that the best-performing classifier was a neural network, with an accuracy of 76%.
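To make the pipeline concrete, the following is a minimal sketch (not the authors' implementation) of posture-based interaction classification: it derives simple posture features, such as forward lean, shoulder width, and movement energy, from RGBD skeleton sequences and trains a small neural network on them. The joint indices, the feature choices, and the synthetic data and labels are illustrative assumptions; a real system would use recorded Kinect-style skeletons and ground-truth interaction labels.

```python
# Minimal sketch (not the authors' implementation): classify
# teacher-student interaction quality from posture features computed
# on RGBD skeleton sequences. Joint indices and feature choices are
# hypothetical; real input would come from a Kinect-style sensor.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

HEAD, SPINE, L_SHOULDER, R_SHOULDER = 0, 1, 2, 3  # hypothetical joint indices

def posture_features(frames: np.ndarray) -> np.ndarray:
    """Summarize skeleton frames of shape (T, n_joints, 3) as one feature vector."""
    head, spine = frames[:, HEAD], frames[:, SPINE]
    # Forward lean: angle between the spine->head vector and the vertical axis.
    trunk = head - spine
    trunk /= np.linalg.norm(trunk, axis=1, keepdims=True)
    lean = np.degrees(np.arccos(np.clip(trunk @ np.array([0.0, 1.0, 0.0]), -1.0, 1.0)))
    # Shoulder width over time: a crude proxy for slumping vs. an open posture.
    width = np.linalg.norm(frames[:, L_SHOULDER] - frames[:, R_SHOULDER], axis=1)
    # Movement energy: mean frame-to-frame displacement across all joints.
    energy = np.linalg.norm(np.diff(frames, axis=0), axis=2).mean()
    return np.array([lean.mean(), lean.std(), width.mean(), width.std(), energy])

# Synthetic stand-in data: one feature vector per recorded session,
# with y = 1 for "good" interaction and y = 0 for "poor" interaction.
rng = np.random.default_rng(0)
X = np.stack([posture_features(rng.normal(size=(120, 4, 3))) for _ in range(200)])
y = rng.integers(0, 2, size=200)  # placeholder labels, illustration only

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0))
clf.fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")
```

The multilayer perceptron is used here to mirror the abstract's finding that a neural-network classifier performed best; comparing against other methods would only require swapping the pipeline's final step for another scikit-learn classifier.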

Acknowledgements

The authors would like to acknowledge the financial and technical support of Writing Lab, TecLabs, Tecnologico de Monterrey, in the production of this work.

Author information

Corresponding author

Correspondence to Alejandro Valencia-Arias.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Cite this article

Hincapié, M., Díaz, C.A., Valencia-Arias, A. et al. Using RGBD cameras for classifying learning and teacher interaction through postural attitude. Int J Interact Des Manuf 17, 1755–1770 (2023). https://doi.org/10.1007/s12008-023-01262-3
