Application of Convolutional Neural Networks for Fall Detection Using Multiple Cameras

Chapter
Part of the Studies in Systems, Decision and Control book series (SSDC, volume 273)

Abstract

Currently, one of the most important research issues in artificial intelligence and computer vision is the recognition of human falls. Given the exponential increase in the use of cameras, vision-based approaches to fall detection and classification have become common. At the same time, deep learning algorithms have transformed the way vision-based problems are addressed. As a deep learning technique, the convolutional neural network (CNN) offers reliable and robust solutions to detection and classification problems. Focusing exclusively on a vision-based approach, this work uses images from a new public multimodal fall detection data set (the UP-Fall Detection dataset) published by our research team. In this chapter we present a fall detection system based on a 2D CNN that analyzes information from multiple cameras. The method processes images in fixed-length time windows and extracts features with an optical flow method that captures the relative motion between two consecutive images. We evaluated this approach experimentally on the UP-Fall Detection dataset. Results show that our multi-vision-based approach detects human falls with 95.64% accuracy using a simpler CNN architecture than other state-of-the-art methods.
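To make the pipeline described above concrete, the following is a minimal sketch, assuming OpenCV's Farneback dense optical flow and a small Keras CNN. The window length, input resolution, layer layout, and single-view input are illustrative assumptions, not the chapter's exact settings; in particular, the actual system fuses views from multiple cameras.

    # Minimal sketch (assumptions, not the chapter's exact pipeline):
    # dense optical flow between consecutive frames in a fixed window,
    # stacked depth-wise and classified by a small 2D CNN.
    import cv2
    import numpy as np
    from tensorflow.keras import layers, models

    WINDOW = 16    # assumed number of frames per time window
    H, W = 64, 64  # assumed input resolution after resizing

    def optical_flow_window(frames):
        """Stack Farneback flow fields for each consecutive frame pair."""
        flows = []
        prev = cv2.resize(cv2.cvtColor(frames[0], cv2.COLOR_BGR2GRAY), (W, H))
        for frame in frames[1:]:
            curr = cv2.resize(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), (W, H))
            # Two channels per pair: horizontal and vertical motion
            flow = cv2.calcOpticalFlowFarneback(prev, curr, None,
                                                0.5, 3, 15, 3, 5, 1.2, 0)
            flows.append(flow)
            prev = curr
        # Resulting shape: (H, W, 2 * (WINDOW - 1))
        return np.concatenate(flows, axis=-1)

    def build_cnn(channels=2 * (WINDOW - 1)):
        """Simple 2D CNN over stacked flow channels; binary fall / no-fall."""
        model = models.Sequential([
            layers.Input(shape=(H, W, channels)),
            layers.Conv2D(16, 3, activation="relu"),
            layers.MaxPooling2D(),
            layers.Conv2D(32, 3, activation="relu"),
            layers.MaxPooling2D(),
            layers.Flatten(),
            layers.Dense(64, activation="relu"),
            layers.Dense(1, activation="sigmoid"),
        ])
        model.compile(optimizer="adam", loss="binary_crossentropy",
                      metrics=["accuracy"])
        return model

A second camera could be handled, for example, by concatenating its flow channels before the first convolution; the chapter's own multi-camera fusion strategy may differ.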

Keywords

Deep learning · Fall detection system · CNN · Multiple cameras

Notes

Conflict of interest

The authors declare that there are no conflicts of interest regarding the publication of this article.

Funding Statement

The authors declare that this work was performed as part of their employment at Universidad Panamericana (Mexico).


Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  1. Facultad de Ingeniería, Universidad Panamericana, Aguascalientes, Mexico
  2. Facultad de Ingeniería, Universidad Panamericana, Ciudad de México, Mexico
