Sensors for Seamless Learning

  • Marcus Specht
  • Limbu Bibeg Hang
  • Jan Schneider Barnes
Chapter
Part of the Lecture Notes in Educational Technology book series (LNET)

Abstract

The chapter highlights the role of sensors in supporting seamless learning experiences. The first part introduces the relation between sensor-based tracking of learning activities and research on real-time feedback in educational settings, and gives an overview of the kinds of sensor data that have been used for educational purposes in the literature. The second part introduces the link between sensor data and educational interventions, in particular the role of expert models built from tracking real-world expert performance. The third part of the chapter illustrates how educational augmented reality (AR) applications have used sensor data for different forms of learning support. Building on this analysis of sensor tracking, the authors present 15 design patterns that have been implemented in different educational AR applications. For future AR applications, the authors propose that using sensors to build expert performance models is essential for a variety of educational interventions.
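The core idea of building expert performance models from sensor data can be illustrated with a minimal sketch: several recorded expert traces are summarised into a simple statistical model, and a learner's trace is compared against it to derive corrective feedback. The sensor feature (torso pitch from an IMU), the function names, and the thresholds below are hypothetical assumptions chosen for illustration, not the chapter's implementation.

```python
# Illustrative sketch only: summarise recorded expert sensor traces into a
# simple per-feature model and flag where a learner deviates from it.
# Sensor feature, thresholds, and function names are hypothetical.
from statistics import mean, stdev

def build_expert_model(expert_traces):
    """Summarise several expert performances as per-feature (mean, stdev)."""
    model = {}
    for feature in expert_traces[0]:
        per_expert_means = [mean(trace[feature]) for trace in expert_traces]
        spread = stdev(per_expert_means) if len(per_expert_means) > 1 else 1.0
        model[feature] = (mean(per_expert_means), spread)
    return model

def feedback(learner_trace, expert_model, tolerance=2.0):
    """Flag features where the learner is more than `tolerance` stdevs off."""
    messages = []
    for feature, (mu, sigma) in expert_model.items():
        z = (mean(learner_trace[feature]) - mu) / (sigma or 1.0)
        if abs(z) > tolerance:
            direction = "reduce" if z > 0 else "increase"
            messages.append(f"{direction} {feature} (z = {z:+.1f})")
    return messages or ["within expert range"]

# Example: torso pitch (degrees) sampled from a hypothetical IMU stream.
experts = [{"torso_pitch": [2.0, 3.1, 2.5]}, {"torso_pitch": [1.8, 2.2, 2.9]}]
learner = {"torso_pitch": [14.0, 15.5, 13.2]}
print(feedback(learner, build_expert_model(experts)))
```

In a real AR training application such feedback would be mapped onto the kinds of real-time cues discussed in the chapter (visual overlays, haptic or auditory prompts) rather than printed text.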


Copyright information

© Springer Nature Singapore Pte Ltd. 2019

Authors and Affiliations

  • Marcus Specht (1)
  • Limbu Bibeg Hang (1, 2)
  • Jan Schneider Barnes (1, 3)
  1. Center for Education and Learning, TU Delft, Delft, Netherlands
  2. Welten Institute, Open Universiteit Netherlands, Heerlen, Netherlands
  3. DIPF German Institute for Pedagogical Research, Frankfurt, Germany