Multimodal Tutor for CPR

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10948)


This paper describes the design of an intelligent multimodal tutor for training people to perform cardiopulmonary resuscitation using patient manikins (CPR tutor). The tutor uses a multi-sensor setup to track CPR execution and generate personalised feedback, including unobtrusive vibrations and retrospective summaries. This study is the main experiment of a PhD project on using multimodal data to investigate practice-based learning scenarios, such as psychomotor skills training in the classroom or at the workplace. For the CPR tutor, the multimodal data considered consist of the trainee’s body position (captured with a Microsoft Kinect), electromyogram signals (captured with a Myo armband) and compression-rate data derived from the manikin. The CPR tutor builds on a new technological framework, the Multimodal Pipeline, which defines a set of technical approaches for the collection, storage, processing, annotation and exploitation of multimodal data. This paper opens up the motivation, planning and expected evaluation of this experiment to further feedback and consideration by the scientific community.
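The three sensor streams named above (Kinect body position, Myo electromyogram, manikin compression rate) arrive at different sampling rates, so a pipeline like the one the abstract describes must first align them on a common timeline before annotation and feedback generation. The sketch below is purely illustrative, not the authors' Multimodal Pipeline implementation: all class and function names (`SensorFrame`, `align_frames`) and the fixed-window alignment strategy are assumptions for the sake of example.

```python
from dataclasses import dataclass
from collections import defaultdict

@dataclass
class SensorFrame:
    """One timestamped sample from a single modality.

    modality: a stream label, e.g. "kinect", "myo" or "manikin".
    values:   raw sample payload (joint coordinates, EMG channels,
              compression rate, ...), kept as a plain dict here.
    """
    timestamp_ms: int
    modality: str
    values: dict

def align_frames(frames, window_ms=100):
    """Group frames from all modalities into fixed time windows.

    Returns a dict mapping window index -> list of frames whose
    timestamps fall inside that window. Downstream steps (annotation,
    feedback) can then treat each window as one multimodal observation.
    """
    windows = defaultdict(list)
    for frame in frames:
        windows[frame.timestamp_ms // window_ms].append(frame)
    return dict(windows)

# Example: three samples, two of which land in the same 100 ms window.
stream = [
    SensorFrame(10, "kinect", {"elbow_angle": 172.0}),
    SensorFrame(50, "myo", {"emg": [0.12, 0.30]}),
    SensorFrame(120, "manikin", {"compressions_per_min": 108}),
]
aligned = align_frames(stream)
```

Fixed windows are the simplest alignment choice; a real pipeline might instead resample each stream or interpolate to the rate of the slowest sensor.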


Keywords: Multimodal Data · Psychomotor Skills Training · Patient Manikin · Retrospective Summary · Affective Learning Companions



Copyright information

© Springer International Publishing AG, part of Springer Nature 2018

Authors and Affiliations

  1. Welten Institute, Research Centre for Learning, Teaching and Technology, Open University of the Netherlands, Heerlen, Netherlands
