Affordances for Capturing and Re-enacting Expert Performance with Wearables

  • Will Guest
  • Fridolin Wild
  • Alla Vovk
  • Mikhail Fominykh
  • Bibeg Limbu
  • Roland Klemke
  • Puneet Sharma
  • Jaakko Karjalainen
  • Carl Smith
  • Jazz Rasool
  • Soyeb Aswat
  • Kaj Helin
  • Daniele Di Mitri
  • Jan Schneider
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10474)

Abstract

The WEKIT.one prototype is a platform for immersive procedural training with wearable sensors and Augmented Reality. Focusing on the capture and re-enactment of human expertise, this work examines the unique affordances of suitable hardware and software technologies. The practical challenges of interpreting expertise, selecting suitable sensors for its capture, and specifying the means to describe and display it to the novice are of central significance here. We link affordances with hardware devices and discuss alternatives, including the Microsoft HoloLens, Thalmic Labs MYO, Alex posture sensor, MyndPlay EEG headband, and a heart rate sensor. Following the selection of sensors, we describe integration and communication requirements for the prototype. We close with thoughts on the wider possibilities for implementation and next steps.

Keywords

Affordances · Augmented reality · Wearable technologies · Capturing expertise


Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  • Will Guest (1)
  • Fridolin Wild (1)
  • Alla Vovk (1)
  • Mikhail Fominykh (2)
  • Bibeg Limbu (3)
  • Roland Klemke (3)
  • Puneet Sharma (4)
  • Jaakko Karjalainen (5)
  • Carl Smith (6)
  • Jazz Rasool (6)
  • Soyeb Aswat (7)
  • Kaj Helin (5)
  • Daniele Di Mitri (3)
  • Jan Schneider (3)

  1. Oxford Brookes University, Oxford, UK
  2. Europlan UK Ltd., London, UK
  3. Open University of the Netherlands, Heerlen, Netherlands
  4. University of Tromsø, Tromsø, Norway
  5. VTT, Espoo, Finland
  6. Ravensbourne, London, UK
  7. Myndplay, London, UK