
Deep Context Model (DCM): dual context-attention aware model for recognizing the heterogeneous human activities using smartphone sensors

  • Original Paper
  • Published:
Evolving Systems

Abstract

Human Activity Recognition (HAR) using smartphone sensors has been identified as a significant emerging research domain, and its applications benefit from intelligent, tailored activity monitoring. Researchers have proposed various HAR models to recognize human activity patterns from traditional smartphone sensor data. Embedding contextual information, such as data availability, sensing-device orientation, body-part location, axis layout, and more, has a fruitful impact on the quality of activity sensor data. A key research challenge arises from the lack of contextual information in sensor data, which leads to ambiguity among activity patterns. Often, a separate motion sensor has been used to acquire contextual information, consuming unnecessary computational resources. In this paper, we use the availability of activity sensor data itself as contextual information. The proposed Deep Context Model (DCM) recognizes activity patterns in a dual context-attention mode, i.e., static and dynamic context. The model consists of convolutional and recurrent networks that capture the activity patterns associated with each context: the convolutional networks perform automatic feature extraction in the static context, whereas the recurrent networks memorize temporal patterns in the dynamic context. We evaluated the performance of the proposed model on the openly accessible KU-HAR dataset. The experimental results show that DCM achieves F1 scores of 98.96% and 99.62% for the static and dynamic contexts, respectively. Further, the robustness and applicability of the proposed model were gauged on the HHAR dataset.
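To make the dual-context idea concrete, the following is a minimal sketch, in PyTorch, of a two-branch network in which a 1D-convolutional branch extracts features for the static context while an LSTM branch memorizes temporal patterns for the dynamic context. This is not the authors' published configuration: the window length, channel count, hidden sizes, concatenation-based fusion, and class count are illustrative assumptions (KU-HAR provides 18 activity classes).

```python
import torch
import torch.nn as nn

class DualContextSketch(nn.Module):
    """Illustrative dual-branch HAR model: a convolutional branch for the
    static context and a recurrent branch for the dynamic context.
    Layer sizes and the fusion step are assumptions, not the DCM paper's
    exact architecture."""

    def __init__(self, n_channels=6, n_classes=18, hidden=64):
        super().__init__()
        # Convolutional branch: automatic feature extraction (static context)
        self.conv = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        # Recurrent branch: temporal pattern memory (dynamic context)
        self.lstm = nn.LSTM(n_channels, hidden, batch_first=True)
        # Fuse both context representations for classification
        self.classifier = nn.Linear(64 + hidden, n_classes)

    def forward(self, x):  # x: (batch, time, channels)
        static_feat = self.conv(x.transpose(1, 2)).squeeze(-1)  # (batch, 64)
        _, (h_n, _) = self.lstm(x)                               # (1, batch, hidden)
        dynamic_feat = h_n.squeeze(0)                            # (batch, hidden)
        return self.classifier(torch.cat([static_feat, dynamic_feat], dim=1))

# Example: a batch of 300-sample windows from 6 sensor axes
# (tri-axial accelerometer + gyroscope), an assumed input layout.
model = DualContextSketch()
logits = model(torch.randn(8, 300, 6))
print(logits.shape)  # torch.Size([8, 18])
```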

Data availability

The experimental datasets, i.e., KU-HAR and HHAR, are available at https://data.mendeley.com/datasets/45f952y38r/5 and https://archive.ics.uci.edu/dataset/344/heterogeneity+activity+recognition, respectively.

Acknowledgements

The authors gratefully acknowledge funding from the UGC, New Delhi, through the JRF, and from Banaras Hindu University through the Institute of Eminence (IoE) Seed Grant. The authors also thank the professors and researchers of the Department of Computer Science, Banaras Hindu University, for their continuous and valuable discussions on this research topic.

Author information

Authors and Affiliations

Authors

Contributions

Prabhat Kumar designed the architecture, conducted the experiments, analyzed the data, and wrote the article; S. Suresh supervised and reviewed the writing. All authors have read and approved the final article.

Corresponding author

Correspondence to Prabhat Kumar.

Ethics declarations

Conflict of interest

The authors declare that they have no known financial or personal conflicts of interest that could have influenced the work reported in this publication.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Cite this article

Kumar, P., Suresh, S. Deep Context Model (DCM): dual context-attention aware model for recognizing the heterogeneous human activities using smartphone sensors. Evolving Systems (2024). https://doi.org/10.1007/s12530-024-09570-z

  • Received:

  • Accepted:

  • Published:

  • DOI: https://doi.org/10.1007/s12530-024-09570-z

Keywords
