
Web-based objects detection to discover key objects in human activities

  • Original Research
  • Published:
Journal of Ambient Intelligence and Humanized Computing

Abstract

Aging in place has garnered much interest over the past decade among researchers and policymakers. It presents itself as a humane and cost-efficient answer to the worsening financing and staffing problems of our fragile healthcare ecosystem. In that regard, smart environments could help monitor a person's daily activities and provide information about their health status and any problem requiring immediate assistance. To this end, many teams, including ours, have been working on human activity recognition from distributed sensors. Nevertheless, the task remains very challenging because of the difficulty of amassing enough data to learn activity models that generalize well across different residents and different smart environments. Moreover, adding a new activity to the set of recognizable activities requires gathering additional labeled data, a process that is prohibitively costly and time consuming. Therefore, in this paper, our team explores web mining to build activity models in an unsupervised fashion. More specifically, using the results of two popular image search engines queried automatically, combined with object detection/classification models, we learn the set of key objects involved in the realization of activities of daily living. A total of 108 configurations were tested. The experiments show that the key objects related to activities can be extracted easily and with good accuracy.
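The core idea sketched in the abstract — run search-engine images for an activity through an object detector, then keep the objects that recur across images — can be illustrated with a minimal aggregation step. This is a hypothetical sketch, not the authors' implementation: the per-image label lists stand in for real detector output, and the frequency threshold `min_freq` is an assumed parameter.

```python
from collections import Counter

def key_objects(detections_per_image, min_freq=0.5):
    """Given per-image lists of detected object labels for one activity,
    return the objects appearing in at least `min_freq` of the images."""
    n = len(detections_per_image)
    counts = Counter()
    for labels in detections_per_image:
        counts.update(set(labels))  # count each object at most once per image
    return {obj for obj, c in counts.items() if c / n >= min_freq}

# Toy detections for the activity "cooking" (stand-ins for search-engine
# images passed through an object detection model):
images = [
    ["pan", "stove", "person"],
    ["pan", "knife", "stove"],
    ["stove", "pan"],
    ["knife", "cutting board"],
]
print(sorted(key_objects(images)))  # → ['knife', 'pan', 'stove']
```

Counting each label once per image keeps a single cluttered photo from dominating the statistics; "person", which appears in only one of the four images, is filtered out as incidental rather than key.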


Figures 1–15 are available in the full article.


Notes

  1. https://images.google.com/.

  2. https://github.com/CharlesCousyn/search_activities.

  3. https://github.com/CharlesCousyn/image_retrieval.

  4. https://github.com/CharlesCousyn/image_extractor.

  5. http://casas.wsu.edu/datasets/adlnormal.zip.

  6. https://github.com/CharlesCousyn/search_activities.

  7. https://github.com/CharlesCousyn/image_retrieval.

  8. https://github.com/CharlesCousyn/image_extractor.

  9. http://casas.wsu.edu/datasets/adlinterweave.zip.

  10. http://casas.wsu.edu/datasets/adlnormal.zip.

  11. https://github.com/CharlesCousyn/image_extractor/blob/master/configFiles/groundTruth.json.

  12. https://storage.googleapis.com/download.tensorflow.org/data/ImageNetLabels.txt.

  13. https://gist.github.com/AruniRC/7b3dadd004da04c80198557db5da4bda.

  14. https://github.com/pjreddie/darknet/blob/1e729804f61c8627eb257fba8b83f74e04945db7/data/9k.names.


Author information


Corresponding author

Correspondence to Charles Cousyn.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Cousyn, C., Bouchard, K. & Gaboury, S. Web-based objects detection to discover key objects in human activities. J Ambient Intell Human Comput 14, 3041–3056 (2023). https://doi.org/10.1007/s12652-021-03433-0

