
Role of Multimodal Learning Systems in Technology-Enhanced Learning (TEL): A Scoping Review

  • Conference paper
  • First Online:
Responsive and Sustainable Educational Futures (EC-TEL 2023)

Abstract

Technology-enhanced learning systems, specifically multimodal learning technologies, use sensors to collect data from multiple modalities and provide personalized learning support beyond traditional learning settings. However, studies of such multimodal learning systems tend to focus on technical aspects of data collection and exploitation, overlooking theoretical and instructional design aspects such as feedback design in multimodal settings. This paper examines multimodal learning systems as a critical part of technology-enhanced learning, capturing and analyzing the learning process and exploiting the collected multimodal data to generate feedback in multimodal settings. By investigating a range of studies, we aim to reveal the roles of multimodality in technology-enhanced learning across learning domains. Our scoping review outlines the conceptual landscape of multimodal learning systems, identifies potential gaps, and offers new perspectives on adaptive multimodal system design: intertwining learning data for meaningful insights into learning, designing effective feedback, and implementing such systems in diverse learning domains.


Notes

  1. https://openai.com/.

  2. http://www.immersion.fr/.


Acknowledgements

This work was supported by Leiden-Delft-Erasmus Centre for Education and Learning (LDE-CEL).

Author information


Corresponding author

Correspondence to Yoon Lee.



Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Lee, Y., Limbu, B., Rusak, Z., Specht, M. (2023). Role of Multimodal Learning Systems in Technology-Enhanced Learning (TEL): A Scoping Review. In: Viberg, O., Jivet, I., Muñoz-Merino, P., Perifanou, M., Papathoma, T. (eds) Responsive and Sustainable Educational Futures. EC-TEL 2023. Lecture Notes in Computer Science, vol 14200. Springer, Cham. https://doi.org/10.1007/978-3-031-42682-7_12


  • DOI: https://doi.org/10.1007/978-3-031-42682-7_12

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-42681-0

  • Online ISBN: 978-3-031-42682-7

  • eBook Packages: Computer Science, Computer Science (R0)
