Abstract
In daily life, there are many objects that cannot actively communicate with us, such as keychains, glasses, and mobile phones; we refer to these as non-cooperative targets. Non-cooperative targets are often misplaced by users and hard to find, so it would be convenient if we could localize them. We propose a MEMS-based non-cooperative target localization system. Using the MEMS sensors embedded in a smartwatch, we detect changes in the user's arm posture: we first distinguish the arm motions, identify the final motion, and then perform localization. There are two essential models in our system. The first is an arm gesture estimation model based on the smartwatch's MEMS sensors. We collect MEMS sensor data from the watch, build an arm kinematic model, and formulate the mathematical relationship between the arm's degrees of freedom and the pose of the watch. We compare the estimates of the four actions that matter for the later model against Kinect observations; the errors in space are less than 0.14 m. The second is a non-cooperative target localization model built on the first. We use the five-degree-of-freedom data of the arm to train a classification model and identify the key actions in the scene; we then estimate the location of non-cooperative targets from the type of interactive action. To demonstrate the effectiveness of our system, we implement it for tracking keys and mobile phones in practice. Experiments show a localization accuracy above 83%.
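The abstract mentions formulating the mathematical relationship between the arm's degrees of freedom and the watch's pose. The paper's exact kinematic parameterization is not given here, but a minimal forward-kinematics sketch illustrates the idea: assuming a 5-DoF arm model (three shoulder angles plus two elbow angles, names and link lengths chosen for illustration only), the wrist (watch) position follows from chaining rotation matrices along the upper arm and forearm.

```python
import numpy as np

# Elementary rotation matrices about the body-frame axes.
def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_y(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, s * 0], [s, c, 0], [0, 0, 1]])

def wrist_position(angles, upper_arm=0.30, forearm=0.25):
    """Forward kinematics for an assumed 5-DoF arm model.

    angles: (shoulder_yaw, shoulder_pitch, shoulder_roll,
             elbow_flexion, elbow_rotation) in radians.
    Link lengths are illustrative defaults in metres.
    Returns the wrist position in the shoulder frame.
    """
    sy, sp, sr, ef, er = angles
    # Orientation of the upper arm after the three shoulder rotations.
    r_shoulder = rot_z(sy) @ rot_y(sp) @ rot_x(sr)
    elbow = r_shoulder @ np.array([upper_arm, 0.0, 0.0])
    # Forearm orientation adds elbow rotation and flexion.
    r_elbow = r_shoulder @ rot_x(er) @ rot_y(ef)
    return elbow + r_elbow @ np.array([forearm, 0.0, 0.0])
```

With all angles at zero, the arm points straight along the x-axis, so the wrist sits at `upper_arm + forearm` from the shoulder; this kind of closed-form position is what a smartwatch-based model would compare against Kinect ground truth.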
This work is supported by NSFC Grants No. 61802299, 61772413, and 61672424.
© 2019 ICST Institute for Computer Sciences, Social Informatics and Telecommunications Engineering
Lin, L., Yang, H., Liu, Y., Zheng, H., Zhao, J. (2019). Toward Detection of Driver Drowsiness with Commercial Smartwatch and Smartphone. In: Li, Q., Song, S., Li, R., Xu, Y., Xi, W., Gao, H. (eds) Broadband Communications, Networks, and Systems. Broadnets 2019. Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering, vol 303. Springer, Cham. https://doi.org/10.1007/978-3-030-36442-7_15
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-36441-0
Online ISBN: 978-3-030-36442-7
eBook Packages: Computer Science (R0)