Detecting Unusual Human Activities Using GPU-Enabled Neural Network and Kinect Sensors
Abstract
Graphics Processing Units (GPUs) and Kinect sensors are promising devices for Internet of Things (IoT) computing environments in various application domains, including mobile healthcare. In this chapter, a novel training/testing process for building and testing a classification model for unusual human activities (UHA) using ensembles of Neural Networks running on NVIDIA GPUs is proposed. Traditionally, UHA detection is performed by a classifier that learns what activities a person is doing by training on skeletal data obtained from a motion sensor such as the Microsoft Kinect [1]. These skeletal data are the spatial coordinates (x, y, z) of different parts of the human body. The numeric information forms time series: temporal records of movement sequences that can be used to train an ensemble of Neural Networks. In addition to the spatial features that describe current positions in the skeletal data, new features called shadow features are used to improve the supervised learning efficiency of the ensemble of Neural Networks running on an NVIDIA GPU card. Shadow features are inferred from the dynamics of body movements and thereby model the underlying momentum of the performed activities. They provide extra dimensions of information for characterizing activities in the classification process and thus significantly improve accuracy. We show that the accuracy of a single Neural Network classifier trained on a data set with shadow features can be further increased when more than one Neural Network is used, forming an ensemble of networks. To accelerate the processing speed of this ensemble of Neural Networks, the proposed model is designed and optimized to run on NVIDIA GPUs with CUDA.
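The abstract does not spell out how shadow features are computed, so the following is a minimal, hedged sketch of one plausible way to derive momentum-like shadow features from Kinect skeletal time series and append them to the positional features. The function name shadow_features, the smoothing constant alpha, and the array shapes are illustrative assumptions, not the chapter's definitive method.

```python
# Hedged sketch only: one plausible way to derive "shadow features" from
# Kinect skeletal time series, assuming they encode movement dynamics
# (momentum). The exact formulation in the chapter may differ.
import numpy as np

def shadow_features(skeleton_seq: np.ndarray, alpha: float = 0.7) -> np.ndarray:
    """skeleton_seq: (T, J, 3) array of (x, y, z) joint positions over T frames.
    Returns a (T, J, 3) array of exponentially smoothed frame-to-frame
    displacements, a momentum-like signal to append to the positions."""
    # frame-to-frame displacement; the first frame's displacement is zero
    velocity = np.diff(skeleton_seq, axis=0, prepend=skeleton_seq[:1])
    shadow = np.zeros_like(velocity)
    for t in range(1, len(velocity)):
        # exponential smoothing: current motion blended with accumulated motion
        shadow[t] = alpha * shadow[t - 1] + (1.0 - alpha) * velocity[t]
    return shadow

# Usage: concatenate positions and shadow features per frame before training
# the Neural Network ensemble (sizes below are placeholders, not real data).
T, J = 120, 20                       # e.g. 120 frames, 20 Kinect joints
positions = np.random.rand(T, J, 3)  # stand-in for real skeletal data
features = np.concatenate([positions, shadow_features(positions)], axis=-1)
print(features.shape)                # (120, 20, 6): (x, y, z) + shadow (x, y, z)
```

In this sketch the shadow channel is an exponentially smoothed displacement, which is one simple way to capture the momentum of a movement while keeping the feature vector the same length for every frame.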
Keywords
Unusual human activities · Neural network · Machine learning · GPU · Classification · Healthcare · Internet of Things
Acknowledgments
The authors are thankful for the financial support from the research grant “A scalable data stream mining methodology: stream-based holistic analytics and reasoning in parallel”, Grant no. FDCT-126/2014/A3, offered by the University of Macau, FST, RDAO, and the FDCT of the Macau SAR government. The work of D. Korzun is financially supported by the Russian Foundation for Basic Research (RFBR), research project no. 16-07-01289.
References
- 1. Suvagiya, P.H., Bhatt, C.M., Patel, R.P.: Indian sign language translator using Kinect. In: Proceedings of the International Conference on ICT for Sustainable Development, vol. 2, pp. 15–23 (2016)
- 2. Kim, E., Helal, S., Cook, D.: Human activity recognition and pattern discovery. IEEE Pervasive Comput. 9(1), 48–53 (2010)
- 3. Lara, O.D., Labrador, M.A.: A survey on human activity recognition using wearable sensors. IEEE Commun. Surv. Tutor. 15(3), 1192–1209 (2011)
- 4. Tapia, E.M., Intille, S.S., Larson, K.: Activity recognition in the home using simple and ubiquitous sensors. In: Pervasive Computing, LNCS, vol. 3001, pp. 158–175 (2004)
- 5. Bulling, A., Blanke, U., Schiele, B.: A tutorial on human activity recognition using body-worn inertial sensors. ACM Comput. Surv. 46(3), Article No. 33 (2014)
- 6. Leo, M., D’Orazio, T., Spagnolo, P.: Human activity recognition for automatic visual surveillance of wide areas. In: Proceedings of the ACM 2nd International Workshop on Video Surveillance and Sensor Networks, pp. 124–130. ACM, New York, NY, USA (2004)
- 7. Chan, J.H., Visutarrom, T., Cho, S.-B., Engchuan, W., Mongolnam, P., Fong, S.: A hybrid approach to human posture classification during TV watching. J. Med. Imaging Health Inf. (American Scientific Publishers), ISSN: 2156-7018, accepted for publication (2016)
- 8. Song, W., Lu, Z., Li, J., Li, J., Liao, J., Cho, K., Um, K.: Hand gesture detection and tracking methods based on background subtraction. In: Future Information Technology, Lecture Notes in Electrical Engineering, vol. 309, pp. 485–490 (2014)
- 9. Kim, Y., Sim, S., Cho, S., Lee, W., Jeong, Y.-S., Cho, K., Um, K.: Intuitive NUI for controlling virtual objects based on hand movements. In: Future Information Technology, Lecture Notes in Electrical Engineering, vol. 309, pp. 457–461 (2014)
- 10. Mantyjarvi, J., Himberg, J., Seppanen, T.: Recognizing human motion with multiple acceleration sensors. In: Proceedings of the IEEE International Conference on Systems, Man and Cybernetics, vol. 2, pp. 747–752 (2001)
- 11. Yang, J.: Towards physical activity diary: motion recognition using simple acceleration features with mobile phones. In: IMCE International Workshop on Interactive Multimedia for Consumer Electronics, pp. 1–10 (2009)
- 12. Brito, R., Fong, S., Cho, K., Song, W., Wong, R., Mohammed, S., Fiaidhi, J.: GPU-enabled back-propagation artificial neural network for digit recognition in parallel. J. Supercomput. 1–19 (2016)
- 13. The J. Paul Getty Museum: Photography: Discovery and Invention. ISBN 0-89236-177-8 (1990)
- 14. Vishwakarma, D.K., Rawat, P., Kapoor, R.: Human activity recognition using Gabor wavelet transform and Ridgelet transform. In: 3rd International Conference on Recent Trends in Computing (ICRTC 2015), vol. 57, pp. 630–636 (2015)
- 15. Hornik, K., Stinchcombe, M., White, H.: Multilayer feedforward networks are universal approximators. Neural Netw. 2(5), 359–366 (1989)
- 16. Hornik, K.: Approximation capabilities of multilayer feedforward networks. Neural Netw. 4(2), 251–257 (1991)
- 17. Svozil, D., Kvasnicka, V., Pospichal, J.: Introduction to multi-layer feed-forward neural networks. Chemometr. Intell. Lab. Syst. 39(1), 43–62 (1997)
- 18. Jeng, J.J., Li, W.: Feedforward backpropagation artificial neural networks on reconfigurable meshes. Future Gen. Comput. Syst. 14(5–6), 313–319 (1998)
- 19. Rudolph, G.L., Martinez, T.R.: A transformation strategy for implementing distributed, multi-layer feed-forward neural networks: backpropagation transformation. Future Gen. Comput. Syst. 12(6), 547–564 (1997)