
Using Auditory Features for WiFi Channel State Information Activity Recognition

  • Original Research
  • Published in SN Computer Science

Abstract

Activity recognition has recently gained significant attention due to the widespread availability of smartphones and smartwatches with movement sensors, which allow almost everyone to collect and process relevant measurements. With device-embedded sensors, there is no need to carry dedicated equipment (inertial measurement units or accelerometers) or to use complex software to process the data. This approach, however, still has the disadvantage of requiring the user to carry a device throughout the monitoring period. WiFi channel state information (CSI) offers a passive, device-free alternative for monitoring activities of daily living, even in non-line-of-sight conditions. In this paper, Mel frequency cepstral coefficient (MFCC) feature extraction, used successfully for audio signals, is proposed for CSI time-series classification. The applicability of the proposed features to activity recognition has been evaluated using three classification methods, convolutional neural networks (CNN), long short-term memory recurrent neural networks, and hidden Markov models, in comparison to currently used feature extraction methods (discrete wavelet transform, short-time Fourier transform). MFCC feature extraction achieves higher accuracy in activity classification than the compared methods, as verified by evaluation on two activity datasets; in particular, 95% accuracy is achieved in activity recognition using MFCC features in combination with the CNN classifier.
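The MFCC pipeline the abstract refers to (framing with a window, power spectrum, mel-spaced filterbank, log energies, DCT) can be sketched as follows. This is a minimal NumPy illustration applied to a synthetic 1-D amplitude series standing in for a CSI subcarrier stream; the frame length, hop size, filter count, and sampling rate are illustrative choices, not parameters taken from the paper.

```python
import numpy as np

def mfcc_features(signal, fs, n_filters=20, n_coeffs=13,
                  frame_len=256, hop=128):
    """Compute MFCC-style features from a 1-D time series."""
    # 1) Frame the signal and apply a Hamming window
    n_frames = 1 + (len(signal) - frame_len) // hop
    window = np.hamming(frame_len)
    frames = np.stack([signal[i * hop : i * hop + frame_len] * window
                       for i in range(n_frames)])
    # 2) Power spectrum of each frame
    spectrum = np.abs(np.fft.rfft(frames, axis=1)) ** 2
    n_bins = spectrum.shape[1]
    # 3) Mel-spaced triangular filterbank
    mel = lambda f: 2595.0 * np.log10(1.0 + f / 700.0)
    inv_mel = lambda m: 700.0 * (10.0 ** (m / 2595.0) - 1.0)
    mel_points = np.linspace(mel(0.0), mel(fs / 2.0), n_filters + 2)
    bin_idx = np.floor((frame_len + 1) * inv_mel(mel_points) / fs).astype(int)
    fbank = np.zeros((n_filters, n_bins))
    for i in range(n_filters):
        l, c, r = bin_idx[i], bin_idx[i + 1], bin_idx[i + 2]
        fbank[i, l:c] = (np.arange(l, c) - l) / max(c - l, 1)
        fbank[i, c:r] = (r - np.arange(c, r)) / max(r - c, 1)
    # 4) Log filterbank energies, then DCT-II to decorrelate
    energies = np.log(spectrum @ fbank.T + 1e-10)
    n = np.arange(n_filters)
    dct = np.cos(np.pi * np.outer(np.arange(n_coeffs),
                                  (2 * n + 1) / (2.0 * n_filters)))
    return energies @ dct.T  # shape: (n_frames, n_coeffs)

# Example on a synthetic "CSI-like" amplitude series (2 s at 1 kHz)
sig = np.sin(2 * np.pi * 5 * np.linspace(0, 2, 2000))
feats = mfcc_features(sig, fs=1000)
print(feats.shape)  # one n_coeffs-dimensional feature vector per frame
```

The resulting matrix of per-frame coefficient vectors is the kind of 2-D feature map that can be fed to a CNN or, as a sequence, to an LSTM or HMM classifier.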




Notes

  1. http://www.cse.msu.edu/alexliu/publications/ActivityRecognition/CARMDATA.zip.

References

  1. Politi O, Mporas I, Megalooikonomou V. Human motion detection in daily activity tasks using wearable sensors. In: 2014 22nd European signal processing conference (EUSIPCO); 2014. p. 2315–9.

  2. Xia L, Aggarwal JK. Spatio-temporal depth cuboid similarity feature for activity recognition using depth camera. In: Proceedings of the IEEE conference on computer vision and pattern recognition; 2013. p. 2834–41.

  3. Adib F, Kabelac Z, Katabi D, Miller RC. 3D tracking via body radio reflections. In: 11th USENIX symposium on networked systems design and implementation (NSDI 14). Seattle: USENIX Association; 2014. p. 317–29.

  4. Van Dorp P, Groen FCA. Feature-based human motion parameter estimation with radar. IET Radar Sonar Navig. 2008;2(2):135–45.


  5. Sigg S, Blanke U, Troster G. The telepathic phone: frictionless activity recognition from wifi-rssi. In: 2014 IEEE international conference on pervasive computing and communications (PerCom); 2014. p. 148–55.

  6. Pu Q, Gupta S, Gollakota S, Patel S. Whole-home gesture recognition using wireless signals. In: Proceedings of the 19th annual international conference on mobile computing and networking. ACM; 2013. p. 27–38.

  7. Wang Y, Wu K, Ni LM. Wifall: device-free fall detection by wireless networks. IEEE Trans Mob Comput. 2017;16(2):581–94.


  8. Wang H, Zhang D, Wang Y, Ma J, Wang Y, Li S. Rt-fall: a real-time and contactless fall detection system with commodity wifi devices. IEEE Trans Mob Comput. 2017;16(2):511–26.


  9. Wang W, Liu AX, Shahzad M, Ling K, Lu S. Device-free human activity recognition using commercial wifi devices. IEEE J Sel Areas Commun. 2017;35(5):1118–31.


  10. Yousefi S, Narui H, Dayal S, Ermon S, Valaee S. A survey on behavior recognition using wifi channel state information. IEEE Commun Mag. 2017;55(10):98–104.


  11. Gu Y, Zhan J, Liu Z, Li J, Ji Y, Wang X. Sleepy: adaptive sleep monitoring from afar with commodity wifi infrastructures. In: IEEE wireless communications and networking conference (WCNC); 2018.

  12. Gu Y, Liu T, Li J, Ren F, Liu Z, Wang X, Li P. Emosense: data-driven emotion sensing via off-the-shelf WiFi devices. In: IEEE international conference on communications (ICC) date of conference, 20–24 May 2018; 2018.

  13. Wang G, Zou Y, Zhou Z, Wu K, Ni LM. We can hear you with wi-fi! IEEE Trans Mob Comput. 2016;15(11):2907–20.


  14. Gao Q, Wang J, Ma X, Feng X, Wang H. Csi-based device-free wireless localization and activity recognition using radio image features. IEEE Trans Veh Technol. 2017;66(11):10346–56.


  15. Wu K, Xiao J, Yi Y, Chen D, Luo X, Ni LM. Csi-based indoor localization. IEEE Trans Parallel Distrib Syst. 2013;24(7):1300–9.


  16. Wang X, Gao L, Mao S, Pandey S. Csi-based fingerprinting for indoor localization: a deep learning approach. IEEE Trans Veh Technol. 2017;66(1):763–76.


  17. Bezoui M, Elmoutaouakkil A, Beni-hssane A. Feature extraction of some quranic recitation using mel-frequency cepstral coefficients (mfcc). In: 2016 5th international conference on multimedia computing and systems (ICMCS); 2016. p. 127–131.

  18. Maka T. Change point determination in audio data using auditory features. Int J Electron Telecommun. 2015;61(2):185–90.


  19. Muda L, Begam M, Elamvazuthi I. Voice recognition algorithms using Mel frequency cepstral coefficient (MFCC) and dynamic time warping (DTW) techniques. CoRR abs/1003.4083; 2010.

  20. ETSI. Transmission and quality aspects (STQ); distributed speech recognition; advanced front-end feature extraction algorithm; compression algorithms. European Telecommunications Standards Institute ES 202 050 V1.1.5; 2007.

  21. Hochreiter S, Schmidhuber J. Long short-term memory. Neural Comput. 1997;9(8):1735–80.


  22. Yang J, Nguyen MN, San PP, Li X, Krishnaswamy S. Deep convolutional neural networks on multichannel time series for human activity recognition. IJCAI; 2015.

  23. Ordóñez JF, Roggen D. Deep convolutional and LSTM recurrent neural networks for multimodal wearable activity recognition. Sensors. 2016;16:115. https://doi.org/10.3390/s16010115.


  24. Halperin D, Hu W, Sheth A, Wetherall D. Tool release: gathering 802.11n traces with channel state information. SIGCOMM Comput Commun Rev. 2011;41(1):53.



Acknowledgements

The authors would like to thank Prof. Wei Wang [9] for useful discussions and for providing us the dataset used in CARM. This work is supported by the EU funded project FrailSafe (H2020-PHC-2015-single-stage, Grant agreement no. 690140).

Author information

Authors and Affiliations

Authors

Corresponding author

Correspondence to Thomas Tegou.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Tegou, T., Papadopoulos, A., Kalamaras, I. et al. Using Auditory Features for WiFi Channel State Information Activity Recognition. SN COMPUT. SCI. 1, 3 (2020). https://doi.org/10.1007/s42979-019-0003-2


DOI: https://doi.org/10.1007/s42979-019-0003-2
