Automatic Detection of Sexist Statements Commonly Used at the Workplace

  • Conference paper
Trends and Applications in Knowledge Discovery and Data Mining (PAKDD 2020)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 12237)

Abstract

Detecting hate speech in the workplace is a unique classification task, as the underlying social context implies a subtler version of conventional hate speech. Applications of a state-of-the-art workplace sexism detection model include aids for Human Resources departments, AI chatbots, and sentiment analysis. Most existing hate speech detection methods, although robust and accurate, focus on hate speech found on social media, specifically Twitter. The context of social media is much more anonymous than the workplace, so it tends to lend itself to more aggressive and “hostile” versions of sexism. As a result, datasets with large amounts of “hostile” sexism pose a slightly easier detection task, since “hostile” sexist statements can hinge on a couple of words that, regardless of context, tip the model off that a statement is sexist. In this paper we present a dataset of sexist statements that are more likely to be said in the workplace, as well as a deep learning model that achieves state-of-the-art results. Previous research has created state-of-the-art models to distinguish “hostile” and “benevolent” sexism based simply on aggregated Twitter data. Our deep learning methods, initialized with GloVe or random word embeddings, use LSTMs with attention mechanisms to outperform those models on a more diverse, filtered dataset that is more targeted towards workplace sexism, leading to an F1 score of 0.88.

Supported by Stanford University and ISEP.
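
The following is a minimal sketch of the kind of model the abstract describes: a bidirectional LSTM whose hidden states are pooled by an additive attention layer, with an embedding layer initialized from pre-trained GloVe vectors (or left randomly initialized) and a single logit for the sexist/non-sexist decision. It is an illustration under assumed choices (PyTorch as the framework; all names such as AttentionBiLSTMClassifier, dimensions, and hyperparameters are hypothetical), not the authors' released implementation.

```python
# Illustrative sketch only: a BiLSTM-with-attention binary classifier in the
# spirit of the abstract. Framework choice (PyTorch), dimensions, and names
# are assumptions, not the paper's released code.
import torch
import torch.nn as nn


class AttentionBiLSTMClassifier(nn.Module):
    def __init__(self, embedding_matrix, hidden_dim=128, dropout=0.5):
        super().__init__()
        embed_dim = embedding_matrix.shape[1]
        # Embedding layer seeded with pre-trained GloVe vectors (freeze=False
        # allows fine-tuning); a randomly initialized nn.Embedding works too.
        self.embedding = nn.Embedding.from_pretrained(
            torch.as_tensor(embedding_matrix, dtype=torch.float), freeze=False
        )
        self.lstm = nn.LSTM(
            embed_dim, hidden_dim, batch_first=True, bidirectional=True
        )
        # Additive attention: score every time step, softmax over the sequence,
        # then take the weighted sum of the BiLSTM outputs.
        self.attention = nn.Linear(2 * hidden_dim, 1)
        self.dropout = nn.Dropout(dropout)
        self.classifier = nn.Linear(2 * hidden_dim, 1)  # sexist vs. non-sexist logit

    def forward(self, token_ids):
        # token_ids: (batch, seq_len) integer indices into the vocabulary
        embedded = self.embedding(token_ids)        # (B, T, E)
        outputs, _ = self.lstm(embedded)            # (B, T, 2H)
        scores = self.attention(outputs)            # (B, T, 1)
        weights = torch.softmax(scores, dim=1)      # attention weights over time
        context = (weights * outputs).sum(dim=1)    # (B, 2H) sentence vector
        return self.classifier(self.dropout(context)).squeeze(-1)  # (B,) logits
```

A model like this would typically be trained with nn.BCEWithLogitsLoss on binary sexist/non-sexist labels, with F1 computed on a held-out split of the workplace-sexism dataset.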


Author information

Corresponding author

Correspondence to Dylan Grosz.


Copyright information

© 2020 Springer Nature Switzerland AG

About this paper

Cite this paper

Grosz, D., Conde-Cespedes, P. (2020). Automatic Detection of Sexist Statements Commonly Used at the Workplace. In: Lu, W., Zhu, K.Q. (eds) Trends and Applications in Knowledge Discovery and Data Mining. PAKDD 2020. Lecture Notes in Computer Science, vol. 12237. Springer, Cham. https://doi.org/10.1007/978-3-030-60470-7_11

  • DOI: https://doi.org/10.1007/978-3-030-60470-7_11

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-60469-1

  • Online ISBN: 978-3-030-60470-7

  • eBook Packages: Computer Science, Computer Science (R0)
