Abstract
Falls are a leading cause of increased morbidity and mortality among the elderly. Human activity recognition remains difficult owing to factors such as lighting changes and complex backgrounds. To address these challenges, this paper proposes a method for recognizing fall activity based on the human skeleton and optical flow sequences. First, videos of fall and non-fall activities performed by different testers are captured with the Kinect v2 sensor. The captured videos are then pre-processed to obtain skeleton images and grayscale images for each activity, and the optical flow sequences between adjacent frames are computed with a dense optical flow algorithm. A lightweight convolutional neural network is proposed to extract movement features from the multimodal data and recognize activities. A decision fusion algorithm combining the fuzzy comprehensive evaluation method with a majority voting strategy determines the weight values that influence the final decision. Experimental results show that the proposed method achieves an accuracy of up to 95.31%, demonstrating good performance in recognizing fall activity.
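The fusion step described above can be sketched in simplified form: each modality (skeleton stream, optical flow stream) produces class scores, per-modality weights scale those scores, and a majority vote serves as a companion strategy. The weight values and function names below are illustrative assumptions for exposition, not the paper's exact algorithm.

```python
# Illustrative sketch of weighted decision fusion across modalities.
# The weights here are hypothetical; in the paper they are derived via
# the fuzzy comprehensive evaluation method.

def majority_vote(predictions):
    """Return the label predicted by the most modality classifiers."""
    counts = {}
    for label in predictions:
        counts[label] = counts.get(label, 0) + 1
    return max(counts, key=counts.get)

def weighted_fusion(score_vectors, weights):
    """Combine per-modality score dicts {label: prob} using per-modality weights."""
    fused = {}
    for scores, w in zip(score_vectors, weights):
        for label, p in scores.items():
            fused[label] = fused.get(label, 0.0) + w * p
    return max(fused, key=fused.get)

# Example: skeleton branch favors "fall", optical flow branch favors "walk";
# the (assumed) weights let the skeleton branch dominate.
skeleton_scores = {"fall": 0.7, "walk": 0.3}
flow_scores = {"fall": 0.4, "walk": 0.6}
decision = weighted_fusion([skeleton_scores, flow_scores], [0.6, 0.4])
vote = majority_vote(["fall", "walk", "fall"])
```

Here `weighted_fusion` returns `"fall"` because the weighted score for "fall" (0.6·0.7 + 0.4·0.4 = 0.58) exceeds that of "walk" (0.42); combining such weighted decisions with a vote is one plausible reading of the hybrid strategy the abstract describes.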
Availability of data and materials
The data that support the findings of this study are not openly available because they contain human participant data; they are available from the corresponding author upon reasonable request.
Funding
This work was supported by the National Natural Science Foundation of China under Grant Nos. 61903170, 62173175, and 61877033; by the Natural Science Foundation of Shandong Province under Grant Nos. ZR2019BF045, ZR2019MF021, and ZR2019QF004; and by the Key Research and Development Project of Shandong Province of China under Grant No. 2019GGX101003.
Author information
Authors and Affiliations
Contributions
Yingchan Cao wrote the main manuscript text. All authors reviewed the manuscript.
Corresponding author
Ethics declarations
Conflict of interest
The authors declare no conflict of interest.
Ethical approval
Written informed consent was obtained from all participants prior to their enrollment in this study and for the publication of its results.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
About this article
Cite this article
Cao, Y., Guo, M., Sun, J. et al. Fall detection based on LCNN and fusion model of weights using human skeleton and optical flow. SIViP 18, 833–841 (2024). https://doi.org/10.1007/s11760-023-02776-9
Received:
Revised:
Accepted:
Published:
Issue Date:
DOI: https://doi.org/10.1007/s11760-023-02776-9