
Gender Bias in AI: Implications for Managerial Practices

  • Conference paper

Published in: Responsible AI and Analytics for an Ethical and Inclusive Digitized Society (I3E 2021)

Part of the book series: Lecture Notes in Computer Science (LNISA, volume 12896)

Abstract

Artificial intelligence (AI) applications are now employed in almost every industry, affecting both individuals and society. As many consequential decisions are automated by AI applications, fairness has fast become a vital concern in AI. Moreover, organizational applications of AI-enabled decision systems can exacerbate this problem by amplifying pre-existing societal biases and creating new types of bias. Notably, the related literature and industry press suggest that AI systems are often biased with respect to gender; in particular, AI hiring tools are often biased against women. It is therefore increasingly important to reconsider organizational managerial practices for AI-enabled decision systems so that decision making is fair. Organizations should also develop fair and ethical internal structures, corporate strategies, and governance to manage gender imbalance in AI-enabled recruitment. By systematically reviewing and synthesizing the literature, this paper presents a comprehensive overview of the managerial practices adopted in relation to gender bias in AI. Our findings indicate that these practices include: better fairness governance; continuous training on fairness and ethics for all stakeholders; collaborative organizational learning on fairness and demographic characteristics; an interdisciplinary approach to, and understanding of, AI ethical principles; workplace diversity in managerial roles; strategies for incorporating algorithmic transparency and accountability; and ensuring a human in the loop. With this paper, we aim to contribute to the emerging IS literature on AI by presenting a consolidated picture and understanding of this phenomenon. Based on our findings, we indicate directions for future research in IS towards the better development and use of AI systems.
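The fairness concerns the abstract raises about AI hiring tools can be made concrete with a basic group-fairness check. The sketch below is illustrative only and is not from the paper: it computes the demographic parity difference (the gap in selection rates between two groups) for a hypothetical AI screening tool, using made-up outcome data.

```python
# Minimal sketch of one common fairness check: demographic parity
# for a hypothetical AI hiring screen. All data is illustrative.

def selection_rate(decisions):
    """Fraction of candidates the model selects (decision == 1)."""
    return sum(decisions) / len(decisions)

def demographic_parity_difference(decisions_a, decisions_b):
    """Absolute gap in selection rates between two groups.
    0.0 means parity; larger values indicate disparate outcomes."""
    return abs(selection_rate(decisions_a) - selection_rate(decisions_b))

# Hypothetical screening outcomes (1 = shortlisted, 0 = rejected)
men = [1, 1, 0, 1, 1, 0, 1, 1]    # 6 of 8 shortlisted
women = [1, 0, 0, 1, 0, 0, 1, 0]  # 3 of 8 shortlisted

gap = demographic_parity_difference(men, women)
print(f"Selection-rate gap: {gap:.3f}")  # prints 0.375
```

A nonzero gap does not by itself prove discrimination, but monitoring such metrics is one concrete way the governance and human-in-the-loop practices discussed in the paper can be operationalized.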



Author information

Corresponding author: Ayesha Nadeem

Appendices

Table 1. Managerial practices for mitigating gender bias in AI


Copyright information

© 2021 IFIP International Federation for Information Processing

About this paper


Cite this paper

Nadeem, A., Marjanovic, O., Abedin, B. (2021). Gender Bias in AI: Implications for Managerial Practices. In: Dennehy, D., Griva, A., Pouloudi, N., Dwivedi, Y.K., Pappas, I., Mäntymäki, M. (eds) Responsible AI and Analytics for an Ethical and Inclusive Digitized Society. I3E 2021. Lecture Notes in Computer Science, vol 12896. Springer, Cham. https://doi.org/10.1007/978-3-030-85447-8_23


  • DOI: https://doi.org/10.1007/978-3-030-85447-8_23

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-85446-1

  • Online ISBN: 978-3-030-85447-8
