
AI employment decision-making: integrating the equal opportunity merit principle and explainable AI

  • OPEN FORUM
  • AI & SOCIETY

Abstract

Artificial intelligence (AI) tools used in employment decision-making span multiple stages: job advertisements, shortlisting, interviews and hiring. Actual and potential bias can arise at each of these stages, and one major challenge is to mitigate AI bias and promote fairness in opaque AI systems. This paper argues that the equal opportunity merit principle is an ethical approach for fair AI employment decision-making. Further, explainable AI can mitigate the opacity problem by enhancing the understanding of reasonable users (employing organisations) and affected persons (employees and job candidates) as to the AI output. Both the equal opportunity merit principle and explainable AI should be integrated into the design and implementation of AI employment decision-making systems to ensure, as far as possible, that the AI output is reached through a fair process.


Notes

  1. Miranda Bogen and Aaron Rieke, “Help Wanted: An Examination of Hiring Algorithms, Equity and Bias”, December 2018 at p. 35; see also https://www.inc.com/minda-zetlin/ai-is-now-analyzing-candidates-facial-expressions-during-video-job-interviews.html.

  2. Jeffrey Dastin, “Amazon scraps secret AI recruiting tool that showed bias against women” Business News, 10 October 2018 at https://www.reuters.com/article/us-amazon-com-jobs-automation-insight-idUSKCN1MK08G.

  3. “ACLU Says Facebook Ads let Employers Favor Men Over Women”, WIRED, 18 Sept 2018.

  4. https://www.forbes.com/sites/patriciagbarnes/2020/02/03/group-asks-federal-trade-commission-to-regulate-use-of-artificial-intelligence-in-pre-employment-screenings/#7930fa932b54, and https://epic.org/privacy/ftc/hirevue/.

  5. https://standards.ieee.org/project/7003.html.

  6. The other principles are an individual’s claim to a set of equal basic liberties and the difference principle that socioeconomic inequalities are for the greatest benefit of the least advantaged members of society: Rawls (2001, p. 42).

  7. The Rawlsian set of primary goods includes rights, liberties, income, opportunities and wealth.

  8. This means that “the demographics of the set of individuals receiving any classification are the same as the demographics of the underlying population”.

  9. The term “linear, monotonic” means that “for a change in any given input variable (or sometimes combination or function of an input variable), the output of the response function changes at a defined rate, in only one direction, and at a magnitude represented by a readily available coefficient”.

  10. Civil Action H-14-1189.

  11. Recital 71. Regulation 2016/679 of the European Parliament and of the Council of 27 April 2016 on the Protection of Natural Persons with Regard to the Processing of Personal Data and on the Free Movement of Such Data, and Repealing Directive 95/46/EC, 2016 O.J. (L 119) 1 (EU).

  12. The scope covers “any form of automated processing of personal data evaluating the personal aspects relating to a natural person, in particular to analyse or predict aspects concerning the data subject’s performance at work” amongst others.

  13. Tripartite Guidelines on Fair Employment Practices by the Tripartite Alliance for Fair & Progressive Employment Practices at https://www.tal.sg/tafep/Getting-Started/Fair/Tripartite-Guidelines.

  14. http://www.ilo.org/dyn/normlex/en/f?p=NORMLEXPUB:12100:0::NO::P12100_ILO_CODE:C111, Articles 1 and 2. A total of 175 countries have ratified the Convention as of September 2021.

  15. Title VII of the Civil Rights Act of 1964, as amended by the Civil Rights Act of 1991, 42 U. S. C. §§2000e–2(a).

  16. See Griggs v. Duke Power Co. 401 US 424 (1971); and United Steelworkers of America v Weber 443 US 193 (1979).

  17. The UK Equality and Human Rights Commission promotes equal opportunities at the workplace under the Equality Act 2010: https://www.eoc.org.uk/.

  18. https://www.eoc.org.hk/en/about-the-eoc/introduction-to-eoc.

  19. The Human Rights Commission under the NZ Human Rights Act 1993 at https://www.hrc.co.nz/about/vision-mission-values-and-statutory-responsibilities/. See also the Employment Relations Act 2000.

  20. See Singapore’s Tripartite Guidelines on Fair Employment Practices by the Tripartite Alliance for Fair & Progressive Employment Practices at https://www.tal.sg/tafep/Getting-Started/Fair/Tripartite-Guidelines.

  21. UK Equality Act 2010.

  22. https://pair-code.github.io/what-if-tool/.
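The statistical-parity condition quoted in note 8 can be made concrete in code. The following is a minimal, hypothetical Python sketch (the group labels, the toy shortlisting records and the `tolerance` threshold are all invented for illustration):

```python
def selection_rates(records):
    """Per-group selection rates from (group, selected) pairs."""
    totals, selected = {}, {}
    for group, is_selected in records:
        totals[group] = totals.get(group, 0) + 1
        selected[group] = selected.get(group, 0) + int(is_selected)
    return {g: selected[g] / totals[g] for g in totals}

def satisfies_statistical_parity(records, tolerance=0.05):
    """True if every group's selection rate is within `tolerance` of the rest,
    i.e. the demographics of those selected mirror the underlying population."""
    rates = selection_rates(records).values()
    return max(rates) - min(rates) <= tolerance

# Toy shortlisting outcome: both groups are shortlisted at a 50% rate,
# so the parity condition holds on this data.
outcome = [("A", True), ("A", False), ("B", True), ("B", False)]
```

On this toy data `satisfies_statistical_parity(outcome)` returns `True`; an outcome that shortlisted only group A candidates would fail the check.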
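The “linear, monotonic” response function described in note 9 can likewise be sketched. In the hypothetical scoring function below (the feature names and coefficient values are invented for illustration), a one-unit change in any input always moves the output by that input's coefficient, in one direction only:

```python
# Illustrative weights: each coefficient is the defined, one-directional
# rate at which the output responds to its input variable.
COEFFICIENTS = {"years_experience": 0.8, "test_score": 0.5}
INTERCEPT = 1.0

def response(inputs):
    """Linear scoring function: intercept + sum of coefficient * value."""
    return INTERCEPT + sum(COEFFICIENTS[k] * v for k, v in inputs.items())

base = response({"years_experience": 2, "test_score": 10})
bumped = response({"years_experience": 3, "test_score": 10})
# bumped - base equals exactly the 0.8 coefficient on years_experience,
# which is why such models are considered directly interpretable.
```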

References

  • Altman M, Wood A, Vayena E (2018) A harm-reduction framework for algorithmic fairness. IEEE Secur Priv 16(3):34–45


  • Ajunwa I (2020a) The “black box” at work. Big Data Soc 7(2):1–6

  • Ajunwa I (2020b) The paradox of automation as anti-bias intervention, 41 Cardozo L Rev 1671

  • Arneson RJ (1989) Equality and equal opportunity for welfare. Philos Stud 56(1):77–93


  • Arthur W, Bell ST, Villado AJ, Doverspike D (2006) The use of person–organization fit in employment decision making: an assessment of its criterion-related validity. J Appl Psychol 91(4):786–801


  • Barocas S, Selbst A (2016) Big data’s disparate impact. Calif Law Rev 104(3):671–732


  • Baum K, Mantel S, Schmidt E, Speith T (2022) From responsibility to reason-giving explainable artificial intelligence. Philos Technol 35:12


  • Bellamy RKE, et al (2018) AI Fairness 360: an extensible toolkit for detecting, understanding, and mitigating unwanted algorithmic bias. https://arxiv.org/abs/1810.01943

  • Binns R (2018) Fairness in machine learning: lessons from political philosophy. Proc Mach Learn Res 81:1–11


  • Bogen M, Rieke A (2018) Help wanted: an examination of hiring algorithms, equity and bias. https://www.upturn.org/static/reports/2018/hiring-algorithms/files/Upturn%20--%20Help%20Wanted%20-%20An%20Exploration%20of%20Hiring%20Algorithms,%20Equity%20and%20Bias.pdf. Accessed 16 June 2022

  • Burrell J (2016) How the machine “thinks”: understanding opacity in machine learning algorithms. Big Data Soc 3:1


  • Calders T, Zliobaite I (2013) Why unbiased computational processes can lead to discriminative decision procedures. In: Discrimination and privacy in the information society (Studies in Applied Philosophy, Epistemology and Rational Ethics, vol 3). Springer, pp 43–57. https://doi.org/10.1007/978-3-642-30487-3_3

  • Chalfin A, Danieli O, Hillis A, Jelveh Z, Luca M, Ludwig J, Mullainathan S (2016) Productivity and selection of human capital with machine learning. Am Econ Rev 106(5):124–127


  • Chamorro-Premuzic T, Akhtar R (2019) Should companies use AI to assess job candidates? https://hbr.org/2019/05/should-companies-use-ai-to-assess-job-candidates. Accessed 16 June 2022

  • Cohen GA (2011) On the currency of egalitarian justice and other essays in political philosophy. Princeton University Press


  • Colaner N (2021) Is explainable artificial intelligence intrinsically valuable? AI Soc. https://doi.org/10.1007/s00146-021-01184-2


  • Corbett-Davies S, Goel S (2018) The measure and mismeasure of fairness: a critical review of fair machine learning. https://arxiv.org/abs/1808.00023

  • Davis JL, Williams A, Yang MW (2021) Algorithmic reparation. Big Data Soc 8(2):1–12

  • Doshi-Velez F, Kortz M (2017) Accountability of AI under the law: the role of explanation. Berkman Klein Center for Internet & Society working paper. https://arxiv.org/abs/1711.01134

  • Dwork C, Hardt M, Pitassi T, Reingold O, Zemel RS (2012) Fairness through awareness. In: Proceedings of the 3rd Innovations in Theoretical Computer Science Conference, Cambridge, MA, USA, January 8–10, pp 214–226

  • Dworkin R (2000) Sovereign Virtue: the theory and practice of equality. Harvard University Press, Cambridge


  • Floridi L, Cowls J, Beltrametti M, Chatila R, Chazerand P, Dignum V, Lutge C, Madelin R, Pagallo U, Rossi F, Schafer B, Valcke P, Vayena E (2018) AI4People—an ethical framework for a good AI society: opportunities, risks, principles, and recommendations. Mind Mach 28:689–707. https://doi.org/10.1007/s11023-018-9482-5


  • Friedman B, Nissenbaum H (1996) Bias in computer systems. ACM Transact Inform Syst 14(3):330–347


  • Greenwald AG, Banaji MR (1995) Implicit social cognition: attitudes, self-esteem, and stereotypes. Psychol Rev 102(1):4–27


  • Greenwald A, Krieger LH (2006) Implicit bias: scientific foundations. Calif Law Rev 94(4):945–967


  • Hall P, Gill N (2018) An introduction to machine learning interpretability. O'Reilly Media, Sebastopol, CA

  • Harrison DA, Kravitz DA, Mayer DM, Leslie LM, Lev-Arey D (2006) Understanding attitudes toward affirmative action programs in employment: summary and meta-analysis of 35 years of research. J Appl Psychol 91(5):1013–1036


  • Heinrichs B (2021) Discrimination in the age of artificial intelligence. AI Soc. https://doi.org/10.1007/s00146-021-01192-2


  • Hilliard A, Kazim E, Koshiyama A, Zannone S, Trengove M, Kingsman N, Polle R (2022) Regulating the robots: NYC mandates bias audits for AI-driven employment decisions (April 13, 2022). Available at SSRN: https://ssrn.com/abstract=4083189 or https://doi.org/10.2139/ssrn.4083189. Accessed 16 June 2022

  • Holmes E (2005) Anti-discrimination rights without equality. Mod Law Rev 68(2):175–194


  • Houser KA (2019) Can AI solve the diversity problem in the tech industry: mitigating noise and bias in employment decision-making. Stanford Technol Law Rev 22:290


  • Jayaratne M, Jayatilleke B (2020) Predicting personality using answers to open-ended interview questions. IEEE Access 8:115345–115355. https://doi.org/10.1109/ACCESS.2020.3004002

  • Kim J-Y, Heo WG (2022) Artificial intelligence video interviewing for employment: perspectives from applicants, companies, developer and academicians. Inf Technol People 35(3):861–878


  • Kroll JA, Huey J, Barocas S, Felten EW, Reidenberg JR, Robinson DG, Yu H (2017) Accountable algorithms. Univ Pa Law Rev 165:633–707


  • Kusner MJ, Loftus JR, Russell C et al (2017) Counterfactual fairness. https://arxiv.org/abs/1703.06856

  • Lee MSA, Floridi L, Singh J (2021) Formalising trade-offs beyond algorithmic fairness: lessons from ethical philosophy and welfare economics. AI Ethics. https://doi.org/10.1007/s43681-021-00067-y


  • Lipton P (1990) Contrastive explanation. R Inst Philos Suppl 27:247–266


  • Miller T (2018) Contrastive explanation: a structural-model approach. https://arxiv.org/abs/1811.03163

  • Mittelstadt B, Russell C, Wachter S (2019) Explaining explanations in AI. In FAT* ’19: Conference on Fairness, Accountability, and Transparency (FAT* ’19), January 29–31, 2019, Atlanta, GA, USA. ACM, New York, NY, USA. https://doi.org/10.1145/3287560.3287574

  • Morley J, Floridi L, Kinsey L, Elhalal A (2020) From what to how: an initial review of publicly available AI ethics tools, methods and research to translate principles into practices. Sci Eng Ethics 26(4):2141–2168. https://doi.org/10.1007/s11948-019-00165-5


  • OECD (2019) Recommendation of the Council on Artificial Intelligence. Retrieved from https://legalinstruments.oecd.org/en/instruments/OECD-LEGAL-0449. Accessed 1 June 2022

  • van de Poel I (2020) Embedding values in artificial intelligence (AI) systems. Mind Mach 30(3):385–409. https://doi.org/10.1007/s11023-020-09537-4


  • Raghavan M, Barocas S, Kleinberg J, Levy K (2019) Mitigating bias in algorithmic employment screening: evaluating claims and practices. https://arxiv.org/pdf/1906.09208.pdf

  • Rawls J (1971) A theory of justice. Oxford University Press


  • Rawls J (1999) The law of peoples. Harvard University Press


  • Rawls J (2001) Justice as fairness: a restatement. The Belknap Press of Harvard University Press

  • Ribeiro MT, Singh S, Guestrin C (2016) “Why should I trust you?” Explaining the predictions of any classifier. In: Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. ACM Press, pp 1135–1144

  • Robbins S (2019) A misdirected principle with a catch: explicability for AI. Mind Mach 29:495–514


  • Roemer J (2000) Equality of opportunity. Harvard University Press


  • Romei A, Ruggieri S (2014) A multidisciplinary survey on discrimination analysis. Knowledge Eng Rev 29(5):582–638

  • Ryan M (2020) In AI we trust: ethics, artificial intelligence, and reliability. Sci Eng Ethics 26:2749–2767. https://doi.org/10.1007/s11948-020-00228-y


  • Sandel MJ (2021) The tyranny of merit: what's become of the common good? Penguin Random House UK

  • Selbst AD, Barocas S (2018) The intuitive appeal of explainable machines. Fordham Law Rev 87:1085


  • Selbst AD, Powles J (2017) Meaningful information and the right to explanation. Int Data Privacy Law 7(4):233–242


  • Sen A (1992) Inequality examined. Harvard University Press, Cambridge Massachusetts


  • Sekiguchi T, Huber VL (2011) The use of person–organization fit and person–job fit information in making selection decisions. Organ Behav Hum Decis Process 116:203–216


  • Sinclair A, Carlsson R (2021) Reactions to affirmative action policies in hiring: Effects of framing and beneficiary gender. Anal Soc Issues Public Policy 21:660–678


  • Singapore Academy of Law (SAL), Law Reform Committee, Subcommittee on Robotics and Artificial Intelligence (2020) Applying ethical principles for artificial intelligence in regulatory reform

  • Tambe P, Cappelli P, Yakubovich V (2019) Artificial intelligence in human resources management: challenges and a path forward. Calif Manage Rev 61(4):15–42


  • Temkin LS (2016) The many faces of equal opportunity. Theory Res Educ 14(3):255–276


  • Tippins N, Oswald F, McPhail SM (2021) Scientific, legal, and ethical concerns about AI-based personnel selection tools: a call to action. Personnel Assessment Decisions. https://doi.org/10.25035/pad.2021.02.001

  • Tubella AA, Theodorou A, Dignum F, Dignum V (2019) Governance by glass-box: implementing transparent moral bounds for AI behaviour. https://arxiv.org/abs/1905.04994

  • Wachter S, Mittelstadt B, Russell C (2018) Counterfactual explanations without opening the black box: automated decisions and the GDPR. Harv J Law Technol 31:841


  • Yao YH (2021) Explanatory pluralism in explainable AI. https://arxiv.org/abs/2106.13976


Acknowledgements

This research is supported by the National Research Foundation, Singapore under its Emerging Areas Research Projects (EARP) Funding Initiative. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author and do not reflect the views of the National Research Foundation, Singapore.

Author information

Authors and Affiliations

Authors

Corresponding author

Correspondence to Gary K Y Chan.

Ethics declarations

Conflict of interest

The author declares that there is no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Chan, G.K.Y. AI employment decision-making: integrating the equal opportunity merit principle and explainable AI. AI & Soc (2022). https://doi.org/10.1007/s00146-022-01532-w
