Abstract
The authors have proposed an augmented telepresence system called ARM-COMS (ARm-supported eMbodied COmmunication Monitor System), which detects the orientation of a subject's face using an image-processing-based face-detection tool and mimics the head motion of a remote person, so that the monitor behaves like an avatar during video communication. In addition, ARM-COMS produces appropriate reactions to audio signals when a remote person speaks, even when no significant motion appears in the video. Building on this basic idea, this study focuses on AI speaker technology that answers questions in natural language, and develops a system that integrates health monitoring with ARM-COMS. This paper describes the prototype system developed in this study, presents some of the basic functions implemented in the prototype, and discusses, based on the experimental findings, how the combination of AI technology with ARM-COMS could work.
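The audio-driven reaction described above (responding to a remote speaker's voice even in the absence of visible motion) can be illustrated with a minimal sketch. The code below is not the authors' implementation; it is a hypothetical example assuming a simple RMS-energy voice-activity detector that emits a "nod" command at each onset of speech, which an actuator controller such as ARM-COMS might consume. The function names and the threshold value are assumptions for illustration.

```python
import numpy as np

def detect_speech_activity(frames, threshold=0.02):
    """Label each audio frame as speech-active when its RMS energy
    exceeds a fixed threshold (a deliberately simple VAD sketch)."""
    return [float(np.sqrt(np.mean(f ** 2))) > threshold for f in frames]

def nod_commands(activity):
    """Emit a ('nod', frame_index) command at each onset of speech
    activity, i.e. at each silence-to-speech transition."""
    cmds = []
    prev = False
    for i, active in enumerate(activity):
        if active and not prev:
            cmds.append(("nod", i))
        prev = active
    return cmds

# Toy input: silence, two speech frames, silence, one speech frame.
frames = [np.zeros(160), 0.1 * np.ones(160), 0.1 * np.ones(160),
          np.zeros(160), 0.1 * np.ones(160)]
activity = detect_speech_activity(frames)
commands = nod_commands(activity)
```

In a real system the energy threshold would be adaptive and the nod motion would be rendered by the robotic arm's motion controller, but the sketch captures the idea of entrainment-style reactions triggered purely by audio.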
Acknowledgement
This work was partly supported by JSPS KAKENHI Grant Number JP19K12082 and by an Original Research Grant 2020 of Okayama Prefectural University. The author would like to thank Kengo Sadakane for implementing the basic modules, and all members of the Kansei Information Engineering Labs at Okayama Prefectural University for their cooperation in conducting the experiments.
© 2021 Springer Nature Switzerland AG
Ito, T., Oyama, T., Watanabe, T. (2021). Smart Speaker Interaction Through ARM-COMS for Health Monitoring Platform. In: Yamamoto, S., Mori, H. (eds) Human Interface and the Management of Information. Information-Rich and Intelligent Environments. HCII 2021. Lecture Notes in Computer Science(), vol 12766. Springer, Cham. https://doi.org/10.1007/978-3-030-78361-7_30
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-78360-0
Online ISBN: 978-3-030-78361-7