
BERT-Based Models with BiLSTM for Self-chronic Stress Detection in Tweets

Conference paper · Artificial Intelligence, Data Science and Applications (ICAISE 2023)

Abstract

The rapid growth of social media platforms has provided a rich source for studying users' psychological states. Stress identification in text content, for instance, can yield insights into social media users' mental health. Chronic stress in particular has severe negative consequences, which motivates methods for its early detection and diagnosis. In this paper, we propose a deep learning and Natural Language Processing (NLP) approach to detect self-reported chronic stress in tweets. We implement several pre-trained BERT (Bidirectional Encoder Representations from Transformers) embedding models combined with a BiLSTM (Bidirectional Long Short-Term Memory) classifier. The pre-trained BERT models are fine-tuned to leverage their contextual representations, and the embedding output is fed into a BiLSTM that refines the stress classification by capturing sequential dependencies in the tweet text. Experiments show that the BERT variant with Talking-Heads Attention performs best on this text classification task, and the proposed model outperforms baseline architectures for chronic stress detection in Twitter data.
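As a rough illustration of the pipeline the abstract describes, the sketch below wires a pre-trained TensorFlow Hub BERT encoder to a BiLSTM binary classifier. This is a minimal sketch, not the authors' reported configuration: the checkpoint handles, layer sizes, dropout rate, and learning rate are illustrative assumptions.

```python
# Minimal sketch: a pre-trained BERT encoder whose token-level outputs
# feed a BiLSTM binary classifier (stressed vs. not stressed).
# The TF Hub handles and hyperparameters below are illustrative
# assumptions, not the configuration reported in the paper.
import tensorflow as tf
import tensorflow_hub as hub
import tensorflow_text  # noqa: F401 -- registers ops used by the BERT preprocessor

# Assumed checkpoints; any matching BERT preprocessor/encoder pair works.
PREPROCESS_URL = "https://tfhub.dev/tensorflow/bert_en_uncased_preprocess/3"
ENCODER_URL = "https://tfhub.dev/tensorflow/small_bert/bert_en_uncased_L-4_H-512_A-8/2"

def build_model() -> tf.keras.Model:
    # Raw tweet strings go in; preprocessing happens inside the graph.
    text_input = tf.keras.layers.Input(shape=(), dtype=tf.string, name="tweet")
    preprocess = hub.KerasLayer(PREPROCESS_URL)
    encoder = hub.KerasLayer(ENCODER_URL, trainable=True)  # fine-tune BERT
    outputs = encoder(preprocess(text_input))
    # Per-token contextual embeddings, shape (batch, seq_len, hidden).
    sequence = outputs["sequence_output"]
    # BiLSTM captures sequential dependencies across the tweet.
    x = tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64))(sequence)
    x = tf.keras.layers.Dropout(0.2)(x)
    # Single sigmoid unit for the binary stress label.
    prob = tf.keras.layers.Dense(1, activation="sigmoid")(x)
    model = tf.keras.Model(text_input, prob)
    model.compile(
        optimizer=tf.keras.optimizers.Adam(2e-5),  # small LR keeps fine-tuning stable
        loss="binary_crossentropy",
        metrics=["accuracy"],
    )
    return model
```

Training then reduces to calling `model.fit` on (tweet, label) pairs; since the encoder is trainable, both BERT and the BiLSTM head are updated jointly during fine-tuning.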




Author information


Correspondence to Mohammed Qorich.



Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Qorich, M., Ouazzani, R.E. (2024). BERT-Based Models with BiLSTM for Self-chronic Stress Detection in Tweets. In: Farhaoui, Y., Hussain, A., Saba, T., Taherdoost, H., Verma, A. (eds) Artificial Intelligence, Data Science and Applications. ICAISE 2023. Lecture Notes in Networks and Systems, vol 838. Springer, Cham. https://doi.org/10.1007/978-3-031-48573-2_54

