
Women Worry About Family, Men About the Economy: Gender Differences in Emotional Responses to COVID-19

  • Isabelle van der Vegt
  • Bennett Kleinberg
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 12467)

Abstract

Among the critical challenges of the COVID-19 pandemic is dealing with its potentially detrimental effects on people’s mental health. Designing appropriate interventions and identifying the concerns of those most at risk requires methods that can extract worries, concerns and emotional responses from text data. We examine gender differences and the effect of document length on worries about the ongoing COVID-19 situation. Our findings suggest that (i) short texts do not offer insights into psychological processes as adequately as longer texts do. We further find (ii) marked gender differences in the topics of emotional responses: women worried more about their loved ones and severe health concerns, while men were more occupied with effects on the economy and society. This paper adds to the understanding of general gender differences in language found elsewhere and shows that the current unique circumstances have likely amplified these effects. We close with a call for more high-quality datasets, given the limitations of Tweet-sized data.
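To make the kind of analysis concrete, the sketch below illustrates one simple way to extract a worry signal from raw text and compare two groups. It is a minimal, illustrative example only, not the authors' actual pipeline: the word list, the toy texts and the function name anxiety_rate are hypothetical stand-ins, and a real study would rely on a validated dictionary (e.g. LIWC) and could use Bayesian estimation of the group difference instead of a t-test.

    # Illustrative sketch only: a minimal lexicon-based "worry" score per text,
    # compared across two groups. The lexicon, example texts and function names
    # are hypothetical stand-ins, not the resources used in the paper.
    import re
    from scipy import stats

    ANXIETY_WORDS = {"worried", "worry", "afraid", "anxious", "scared", "fear"}

    def anxiety_rate(text: str) -> float:
        """Proportion of tokens that fall in the (toy) anxiety lexicon."""
        tokens = re.findall(r"[a-z']+", text.lower())
        return sum(t in ANXIETY_WORDS for t in tokens) / len(tokens) if tokens else 0.0

    # Toy corpora standing in for texts written by women and by men.
    texts_women = [
        "I am so worried about my parents and grandparents getting ill",
        "Feeling anxious about my family every single day",
        "Scared that someone I love will end up in hospital",
    ]
    texts_men = [
        "The economy is going to take a massive hit from this lockdown",
        "Worried about jobs and what this does to small businesses",
        "Society will look very different once the markets recover",
    ]

    rates_women = [anxiety_rate(t) for t in texts_women]
    rates_men = [anxiety_rate(t) for t in texts_men]

    # Simple frequentist comparison of the two groups (Welch's t-test);
    # a Bayesian estimate of the group difference is a natural alternative.
    t_stat, p_value = stats.ttest_ind(rates_women, rates_men, equal_var=False)
    print(f"mean worry rate, women: {sum(rates_women) / len(rates_women):.3f}")
    print(f"mean worry rate, men:   {sum(rates_men) / len(rates_men):.3f}")
    print(f"Welch t = {t_stat:.2f}, p = {p_value:.3f}")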

Keywords

Gender differences · COVID-19 · Emotions · Language


Copyright information

© Springer Nature Switzerland AG 2020

Authors and Affiliations

  1. Department of Security and Crime Science, University College London, London, UK
  2. Dawes Centre for Future Crime, University College London, London, UK
