Ackerman, L. (2019). Syntactic and cognitive issues in investigating gendered coreference. Glossa: A Journal of General Linguistics, 4(1), 117. https://doi.org/10.5334/gjgl.721
Asimov, I. (1942). Runaround. Astounding Science Fiction, 29(1), 94–103.
Bailey, A. H., LaFrance, M., & Dovidio, J. F. (2018). Is man the measure of all things? A social cognitive account of androcentrism. Personality and Social Psychology Review, 23(4), 307–331.
Barrault, L., Bojar, O., Costa-jussà, M. R., Federmann, C., Fishel, M., Graham, Y., Haddow, B., Huck, M., Koehn, P., Malmasi, S., & Monz, C. (2019). Findings of the 2019 conference on machine translation (WMT19). In Proceedings of the Fourth Conference on Machine Translation (pp. 1–61).
Best, S. (2017). Is Google Translate sexist? Users report biased results when translating gender-neutral languages into English. The Daily Mail. Retrieved 28 Jan 2020 from https://www.dailymail.co.uk/sciencetech/article-5136607/Is-Google-Translate-SEXIST.html.
Blodgett, S. L., Barocas, S., Daumé III, H., & Wallach, H. (2020). Language (technology) is power: A critical survey of “bias” in NLP. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics.
Bolukbasi, T., Chang, K. W., Zou, J. Y., Saligrama, V., & Kalai, A. T. (2016). Man is to computer programmer as woman is to homemaker? Debiasing word embeddings. Advances in Neural Information Processing Systems, 29, 4349–4357.
Cao, Y. T., & Daumé III, H. (2020). Toward gender-inclusive coreference resolution. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics.
Chinea-Rios, M., Peris, A., & Casacuberta, F. (2017). Adapting neural machine translation with parallel synthetic data. In Proceedings of the Second Conference on Machine Translation (pp. 138–147).
Costa-jussà, M. R., & de Jorge, A. (2020). Fine-tuning neural machine translation on gender-balanced datasets. In Proceedings of the Second Workshop on Gender Bias in Natural Language Processing (pp. 26–34).
Crawford, K. (2017). The trouble with bias. Invited talk at the Conference on Neural Information Processing Systems.
Darwin, H. (2017). Doing gender beyond the binary: A virtual ethnography. Symbolic Interaction, 40(3), 317–334.
Edunov, S., Ott, M., Ranzato, M. A., & Auli, M. (2019). On the evaluation of machine translation systems trained with back-translation. arXiv preprint arXiv:1908.05204. Retrieved January 5, 2021, from https://arxiv.org/pdf/1908.05204.pdf.
Farajian, M. A., Turchi, M., Negri, M., & Federico, M. (2017). Multi-domain neural machine translation through unsupervised adaptation. In Proceedings of the Second Conference on Machine Translation (pp. 127–137).
Fogg, B. J. (2003). Persuasive technology: Using computers to change what we think and do. Burlington, MA: Morgan Kaufmann Publishers.
Font, J. E., & Costa-jussà, M. R. (2019). Equalizing gender bias in neural machine translation with word embeddings techniques. In Proceedings of the First Workshop on Gender Bias in Natural Language Processing (pp. 147–154).
French, R. M. (1999). Catastrophic forgetting in connectionist networks. Trends in Cognitive Sciences, 3(4), 128–135.
Gonen, H., & Goldberg, Y. (2019). Lipstick on a Pig: Debiasing methods cover up systematic gender biases in word embeddings but do not remove them. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers) (pp. 609–614).
Google AI. (2020). Artificial intelligence at Google: Our principles. Retrieved 28 Jan 2020 from https://ai.google/principles.
Government Digital Service (GDS) and Office for Artificial Intelligence (OAI). (2019). Understanding artificial intelligence ethics and safety. Retrieved 28 Jan 2020 from https://www.gov.uk/guidance/understanding-artificial-intelligence-ethics-and-safety.
Heidegger, M. (1954). Die Frage nach der Technik [The question concerning technology]. In Vorträge und Aufsätze (pp. 13–14). Pfullingen: Neske.
HLEGAI (High Level Expert Group on Artificial Intelligence), European Commission. (2019). Ethics guidelines for trustworthy AI. Retrieved 28 Jan 2020 from https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai.
IEEE. (2020). Ethics in action: IEEE global initiative on ethics of autonomous and intelligent systems, Retrieved 3 Feb 2020 from https://ethicsinaction.ieee.org.
Kilgarriff, A., Baisa, V., Bušta, J., Jakubíček, M., Kovář, V., Michelfeit, J., Rychlý, P., & Suchomel, V. (2014). The Sketch Engine: Ten years on. Lexicography, 1(1), 7–36. Retrieved January 5, 2021, from http://www.sketchengine.eu.
Kirkpatrick, J., Pascanu, R., Rabinowitz, N., Veness, J., Desjardins, G., Rusu, A. A., Milan, K., Quan, J., Ramalho, T., Grabska-Barwinska, A., & Hassabis, D. (2017). Overcoming catastrophic forgetting in neural networks. Proceedings of the National Academy of Sciences, 114(13), 3521–3526.
Kopetsch, T. (2010). Dem deutschen Gesundheitswesen gehen die Ärzte aus. Studie zur Altersstruktur und Arztzahlentwicklung [The German healthcare system is running out of doctors: A study of age structure and trends in physician numbers], 5, 1–147. Retrieved January 5, 2021, from https://cdn.aerzteblatt.de/download/files/2010/09/down148303.pdf.
Leslie, D. (2019). Understanding artificial intelligence ethics and safety: A guide for the responsible design and implementation of AI systems in the public sector. The Alan Turing Institute, 6. Retrieved January 5, 2021, from https://www.turing.ac.uk/sites/default/files/2019-06/understanding_artificial_intelligence_ethics_and_safety.pdf.
Lloyd, K. (2018). Bias amplification in artificial intelligence systems. arXiv preprint arXiv:1809.07842. Retrieved January 5, 2021, from https://arxiv.org/ftp/arxiv/papers/1809/1809.07842.pdf.
Maudslay, R. H., Gonen, H., Cotterell, R., & Teufel, S. (2019). It’s all in the name: Mitigating gender bias with name-based counterfactual data substitution. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP) (pp. 5270–5278).
Merriam-Webster. (2019). Merriam-Webster’s words of the year 2019. Retrieved July 14, 2020, from https://www.merriam-webster.com/words-at-play/word-of-the-year/they.
Michel, P., & Neubig, G. (2018). Extreme adaptation for personalized neural machine translation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers) (pp. 312–318).
Olson, P. (2018). The algorithm that helped Google Translate become sexist. Forbes. Retrieved 28 Jan 2020 from https://www.forbes.com/sites/parmyolson/2018/02/15/the-algorithm-that-helped-google-translate-become-sexist.
Papineni, K., Roukos, S., Ward, T., & Zhu, W. J. (2002). BLEU: A method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics (pp. 311–318).
Post, M. (2018). A call for clarity in reporting BLEU scores. In Proceedings of the Third Conference on Machine Translation: Research Papers (pp. 186–191).
Prates, M. O., Avelar, P. H., & Lamb, L. C. (2019). Assessing gender bias in machine translation: A case study with Google Translate. Neural Computing and Applications, 32(10), 1–19.
Rudinger, R., Naradowsky, J., Leonard, B., & Van Durme, B. (2018). Gender bias in coreference resolution. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers) (pp. 8–14).
Sap, M., Card, D., Gabriel, S., Choi, Y., & Smith, N. A. (2019). The risk of racial bias in hate speech detection. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics (pp. 1668–1678).
Saunders, D., & Byrne, B. (2020). Reducing gender bias in neural machine translation as a domain adaptation problem. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics.
Saunders, D., Sallis, R., & Byrne, B. (2020). Neural machine translation doesn’t translate gender coreference right unless you make it. In Proceedings of the Second Workshop on Gender Bias in Natural Language Processing (pp. 35–43).
Segal, H. (2005). Technological utopianism in American culture. New York, USA: Syracuse University Press.
Shah, D., Schwartz, H. A., & Hovy, D. (2020). Predictive biases in natural language processing models: A conceptual framework and overview. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics.
Singer, P. (1979). Practical ethics. Cambridge, UK: Cambridge University Press.
Stanovsky, G., Smith, N. A., & Zettlemoyer, L. (2019). Evaluating gender bias in machine translation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics (pp. 1679–1684).
Statista. (2019a). Ärztinnen in Deutschland nach Arztgruppe bis 2018 [Female physicians in Germany by physician group, to 2018]. Retrieved 3 Feb 2020 from https://de.statista.com/statistik/daten/studie/158852/umfrage/anzahl-der-aerztinnen-nach-taetigkeitsbereichen/#statisticContainer.
Statista. (2019b). Studierende im Fach Humanmedizin in Deutschland nach Geschlecht bis 2018/2019 [Students of human medicine in Germany by gender, to 2018/2019]. Retrieved 3 Feb 2020 from https://de.statista.com/statistik/daten/studie/200758/umfrage/entwicklung-der-anzahl-der-medizinstudenten.
Tan, S., Joty, S., Kan, M. Y., & Socher, R. (2020). It’s Morphin’ Time! Combating linguistic discrimination with inflectional perturbations. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics.
Thaler, R. H., & Sunstein, C. R. (2008). Nudge: Improving decisions about health, wealth, and happiness. London, UK: Yale University Press.
Vanmassenhove, E., Hardmeier, C., & Way, A. (2018). Getting gender right in neural machine translation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing (pp. 3003–3008).
Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, Ł. & Polosukhin, I. (2017). Attention is all you need. In Advances in Neural Information Processing Systems (pp. 5998–6008).
Vaswani, A., Bengio, S., Brevdo, E., Chollet, F., Gomez, A. N., Gouws, S., Jones, L., Kaiser, Ł., Kalchbrenner, N., Parmar, N., & Sepassi, R. (2018). Tensor2Tensor for neural machine translation. arXiv preprint arXiv:1803.07416. https://arxiv.org/pdf/1803.07416.pdf.
Verbeek, P. P. (2006). Materializing morality: Design ethics and technological mediation. Science, Technology, and Human Values, 31(3), 361–380. https://doi.org/10.1177/0162243905285847
Verbeek, P. P. (2017). Designing the morality of things: The ethics of behaviour-guiding technology. In J. van den Hoven, S. Miller, & T. Pogge (Eds.), Designing in ethics (pp. 78–94). Cambridge, UK: Cambridge University Press.
Wang, W., Watanabe, T., Hughes, M., Nakagawa, T., & Chelba, C. (2018). Denoising neural machine translation training with trusted data and online data selection. In Proceedings of the Third Conference on Machine Translation: Research Papers (pp. 133–143).
Winfield, A. F., Michael, K., Pitt, J., & Evers, V. (2019). Machine ethics: The design and governance of ethical AI and autonomous systems. Proceedings of the IEEE, 107(3), 509–517.
Wong, J. C. (2019). The viral selfie app ImageNet Roulette seemed fun – until it called me a racist slur. The Guardian. Retrieved 3 Feb 2020 from https://www.theguardian.com/technology/2019/sep/17/imagenet-roulette-asian-racist-slur-selfie.
Zhao, J., Wang, T., Yatskar, M., Ordonez, V., & Chang, K. W. (2017). Men also like shopping: Reducing gender bias amplification using corpus-level constraints. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing.
Zhao, J., Wang, T., Yatskar, M., Ordonez, V., & Chang, K. W. (2018a). Gender bias in coreference resolution: Evaluation and debiasing methods. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers) (pp. 15–20).
Zhao, J., Zhou, Y., Li, Z., Wang, W., & Chang, K. W. (2018b). Learning gender-neutral word embeddings. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing (pp. 4847–4853).
Zmigrod, R., Mielke, S. J., Wallach, H., & Cotterell, R. (2019). Counterfactual data augmentation for mitigating gender stereotypes in languages with rich morphology. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics (pp. 1651–1661).
Zou, J., & Schiebinger, L. (2018). AI can be sexist and racist—it’s time to make it fair. Nature, 559, 324–326. https://doi.org/10.1038/d41586-018-05707-8