Abstract
Artificial intelligence (AI) applications are now employed in almost every industry, affecting individuals and society alike. As more and more important decisions are automated by AI applications, fairness is fast becoming a vital concern in AI. Moreover, organizational deployment of AI-enabled decision systems has exacerbated this problem by amplifying pre-existing societal biases and creating new types of bias. Notably, the related literature and industry press suggest that AI systems often exhibit gender bias; in particular, AI hiring tools are frequently biased against women. There is therefore a growing need to reconsider organizational managerial practices for AI-enabled decision systems in order to bring fairness into decision making. Organizations should also develop fair and ethical internal structures, corporate strategies, and governance to manage gender imbalance in AI-supported recruitment. By systematically reviewing and synthesizing the literature, this paper presents a comprehensive overview of managerial practices relating to gender bias in AI. Our findings indicate that these practices include: better fairness governance; continuous training on fairness and ethics for all stakeholders; collaborative organizational learning on fairness and demographic characteristics; an interdisciplinary approach to, and understanding of, AI ethical principles; workplace diversity in managerial roles; and designing strategies for incorporating algorithmic transparency and accountability and ensuring a human in the loop. With this paper we aim to contribute to the emerging IS literature on AI by presenting a consolidated picture and understanding of this phenomenon. Based on our findings, we also indicate directions for future IS research on the better development and use of AI systems.
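As a concrete illustration of the kind of fairness check that the governance practices above might operationalise, the sketch below computes the "four-fifths rule" disparate impact ratio for a hiring tool's screening decisions by gender. This is a minimal, hypothetical example, not a method proposed in the paper: the data, function names, and 0.8 review threshold are all illustrative assumptions.

```python
# Hypothetical sketch: a disparate-impact check on a hiring tool's output.
# All data and names are invented for illustration; the paper itself does
# not prescribe this metric.

def selection_rate(outcomes):
    """Fraction of applicants in a group who received a positive decision."""
    return sum(outcomes) / len(outcomes)

def disparate_impact(protected, reference):
    """Ratio of the protected group's selection rate to the reference
    group's. Ratios below 0.8 are commonly flagged for review under the
    four-fifths rule."""
    return selection_rate(protected) / selection_rate(reference)

# Invented screening decisions (1 = advanced to interview, 0 = rejected).
women_outcomes = [1, 0, 0, 1, 0, 0, 0, 1, 0, 0]
men_outcomes   = [1, 1, 0, 1, 1, 0, 1, 1, 0, 1]

ratio = disparate_impact(women_outcomes, men_outcomes)
print(f"Disparate impact ratio: {ratio:.2f}")  # 0.30 / 0.70 ≈ 0.43
if ratio < 0.8:
    # Escalation to a human reviewer is one way to keep a human in the loop.
    print("Flag for human review: possible gender bias in screening.")
```

In this invented example the ratio falls well below 0.8, so the decision would be escalated to a human reviewer rather than acted on automatically, illustrating how a transparency metric and a human-in-the-loop step can work together.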
Keywords
- Artificial intelligence
- Machine learning
- Analytics
- Gender
- Fairness
Copyright information
© 2021 IFIP International Federation for Information Processing
About this paper
Cite this paper
Nadeem, A., Marjanovic, O., Abedin, B. (2021). Gender Bias in AI: Implications for Managerial Practices. In: Dennehy, D., Griva, A., Pouloudi, N., Dwivedi, Y.K., Pappas, I., Mäntymäki, M. (eds.) Responsible AI and Analytics for an Ethical and Inclusive Digitized Society. I3E 2021. Lecture Notes in Computer Science, vol. 12896. Springer, Cham. https://doi.org/10.1007/978-3-030-85447-8_23
DOI: https://doi.org/10.1007/978-3-030-85447-8_23
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-85446-1
Online ISBN: 978-3-030-85447-8
eBook Packages: Computer Science, Computer Science (R0)
Published in cooperation with IFIP: http://www.ifip.org/