Abstract
Over the years, companies have adopted hiring algorithms because they promise wider job candidate pools, lower recruitment costs, and less human bias. Despite these promises, they also bring perils: their use can unintentionally harm individual human rights, specifically the five rights to work, to equality and nondiscrimination, to privacy, to free expression, and to free association. Despite these harms, the AI ethics literature has predominantly focused on abstract ethical principles. This is problematic for two reasons. First, AI principles have been criticized for being vague and not actionable. Second, discussing algorithmic risks in terms of vague ethical principles provides no accountability. This lack of accountability creates an algorithmic accountability gap. Closing this gap is crucial because, without accountability, the use of hiring algorithms can lead to discrimination and unequal access to employment opportunities. This paper makes two contributions to the AI ethics literature. First, it frames the ethical risks of hiring algorithms using international human rights law as a universal standard for determining algorithmic accountability. Second, it evaluates four types of algorithmic impact assessments in terms of how effectively they address the five human rights of job applicants implicated in hiring algorithms, and it determines which of these assessments can help companies audit their hiring algorithms and close the algorithmic accountability gap.
Notes
Mittelstadt (2019) documents more than 60 such initiatives.
Beyond recruitment, companies are also using AI-based HR tools to predict and develop effective corporate wellness initiatives for current employees to drive employee engagement and well-being. See Ajunwa, Crawford, & Ford (2016) for an overview.
Textio claims that its “gender tone meter” identifies overly masculine tones in job postings. Its website demo explains how it works using a typical job posting which reads: “We focus on customers, collaboration, and excellence.” The word “excellence” is then flagged as a “fixed mindset phrase which emphasizes raw talent over growth and will hurt inclusion efforts”. Description retrieved from: https://textio.com/products/ on 20 January 2021.
Arya claims to harness data from social media sites and “50 + other relevant professional sources to create meaningful insights and a deeper intelligence of each candidate” (Raub, 2018).
Mya, a “conversational AI”, promises to “make hiring more human”. It requests applicants to provide additional qualification information, which it independently evaluates. It then automatically schedules qualified applicants for a phone screening interview with a human recruiter. Descriptions retrieved from: https://www.mya.com/meetmya/ on 20 January 2021.
The Cappfinity platform offers the “Koru7 Impact Skills” assessment, which tests for “the seven soft skills that every employer is looking for in their best-fit hires: Grit, Rigour, Impact, Teamwork, Curiosity, Ownership and Polish.” After completing a 20-minute assessment, an applicant is given a “fit score” and a “Koru7 impact skills profile”, which predict their future job performance. If the “fit” or “generosity” scores or the “Koru7 impact skills profile” meet pre-determined standards, then the applicant is invited to a formal interview. Descriptions retrieved from: https://www.cappfinity.com/koru/ on 20 January 2021.
Pymetrics claims to collect “objective behavioral data that measures a job seeker's true potential.” It provides 12 interactive “gamified assessments” which purport to measure attention, effort, fairness, decision making, emotion, focus, generosity, learning and risk tolerance. It then develops tailored interview questions based on these assessments. Descriptions retrieved from: https://www.pymetrics.ai/science on 20 January 2021.
In January 2021, as a result of a public backlash, HireVue announced that it would cease using facial analysis. See, for example, https://www.wired.com/story/job-screening-service-halts-facial-analysis-applicants/
Fama promises to “identify problematic behavior before it becomes an issue” by providing “background checks for the twenty-first century”. It flags instances of misogyny, bigotry, racism, violence and criminal behavior based on an applicant’s social media activities. Descriptions retrieved from https://fama.io/product/ on 14 January 2021.
Beqom uses machine learning to “optimize compensation models and incentives plans,” while also claiming to keep compensation fair and close pay gaps. Descriptions retrieved from https://www.beqom.com/artificial-intelligence-driven-compensation on 14 January 2021.
As we discuss in the following section, the speed and scale at which hiring algorithms perform their tasks may give rise to potential harms in some contexts.
Following Burrell (2016), we can distinguish between three forms of opacity in these kinds of cases: (1) Opacity due to technical illiteracy (e.g., hiring managers insufficiently versed in statistics and thus unable to properly interpret outputs of a predictive hiring model); (2) Opacity due to competitive advantage (e.g., companies treating their data and algorithms in proprietary personality assessments as trade secrets thus limiting a third party from assessing the internal, external and construct validity of such assessments); and (3) Opacity due to fundamental representational capacities of some machine learning models (e.g., deep neural networks used in facial analysis extracting high-level representations incomprehensible to human semantic understanding thus undermining the ability of humans to explain why or how a given classification was reached).
Even if hiring algorithms become transparent and intelligible, companies may nevertheless be unwilling to disclose such information to protect their confidential intellectual property rights over such hiring algorithms (Katyal, 2019).
The infamous Amazon talent recruitment case is a classic example of AI bias. In 2015, Amazon created hiring algorithms to scout for software engineers. It used training data based on the resumes of its top software engineers over the past decade (Houser, 2019). Because 80 percent of Amazon’s top software engineers were male, the algorithms were primarily trained on their resumes. Even though the algorithms were explicitly designed not to use gender as a factor, they recognized patterns in the resumes which had “verbs more commonly found on male engineers’ resumes, such as ‘executed’ and ‘captured’” (Dastin, 2018). Similarly, Amazon’s hiring algorithms may have rejected female applicants whose resumes contained the term ‘women’s’, as in a women’s tennis club or an all-women’s college (Tambe et al., 2019). When Amazon discovered these discriminatory results, it immediately stopped using the hiring algorithms.
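The mechanism behind such proxy discrimination can be sketched in a few lines of Python. The resume data below is invented for illustration, as is the naive token-scoring rule: even when gender is excluded as a feature, a scorer fit to biased historical outcomes will penalize tokens that merely correlate with gender, such as ‘women’s’.

```python
from collections import Counter

# Hypothetical historical hiring data: (resume tokens, hired?).
# Gender itself never appears as a feature.
history = [
    ({"executed", "captured", "java"}, True),
    ({"executed", "python", "lacrosse"}, True),
    ({"led", "java", "women's"}, False),
    ({"captured", "sql"}, True),
    ({"led", "women's", "python"}, False),
    ({"executed", "sql", "java"}, True),
]

def token_weights(data):
    """Naive per-token score: P(hired | token) - P(hired overall)."""
    base = sum(hired for _, hired in data) / len(data)
    counts, hires = Counter(), Counter()
    for tokens, hired in data:
        for t in tokens:
            counts[t] += 1
            hires[t] += hired
    return {t: hires[t] / counts[t] - base for t in counts}

weights = token_weights(history)
# The proxy token inherits the historical bias even though gender was
# excluded: "women's" scores below zero, "executed" above it.
print(sorted(weights.items(), key=lambda kv: kv[1]))
```

The point of the sketch is that removing a protected attribute does not remove bias: any token whose frequency differs across groups can reintroduce it, which is why the paper’s discussion turns on auditing outcomes rather than inspecting feature lists.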
To illustrate, Hootsuite, the leading social media management platform, claims to “empower employees to share posts across their own social networks” under the heading of “employee advocacy”. However, Hootsuite adds that this should be done in a way that “reduces risk of non-compliant or off-brand posts by providing only approved messages for staff to share.” Retrieved from https://www.hootsuite.com/solutions/employee-advocacy on 15 February 2021.
This is an actual case, reported by Quartz, where the name “Jared” and “playing lacrosse” were identified by a hiring algorithm as top predictors of job success. See: https://qz.com/1427621/companies-are-on-the-hook-if-their-hiring-algorithms-are-biased/
Thanks to Don Dedrick for helpful feedback on an early draft of this manuscript.
References
Ajunwa, I., & Schlund, R. (2020). Algorithms and the social organization of work. The Oxford Handbook of Ethics of AI. https://doi.org/10.1093/oxfordhb/9780190067397.013.52
Ajunwa, I., Crawford, K., & Ford, J. S. (2016). Health and big data: An ethical framework for health information collection by corporate wellness programs. The Journal of Law, Medicine & Ethics, 44(3), 474–480.
Algorithmic Accountability Act of 2019, S. 1108, H.R. 2231, 116th Cong. (2019) https://www.congress.gov/bill/116th-congress/house-bill/2231/all-info.
Amnesty International. (2019). Surveillance Giants: How The Business Model of Google and Facebook Threatens Human Rights (p. 60). Amnesty International.
Arneson, R. (2015). Equality of Opportunity. In E. N. Zalta (Ed.), The Stanford Encyclopedia of Philosophy (Summer 2015). California: Metaphysics Research Lab, Stanford University.
Binns, R. (2017). Data protection impact assessments: A meta-regulatory approach. International Data Privacy Law, 7(1), 22–35. https://doi.org/10.1093/idpl/ipw027
Bogen, M., & Rieke, A. (2018). Help Wanted—An Exploration of Hiring Algorithms, Equity and Bias. (p. 75). Upturn. https://www.upturn.org/static/reports/2018/hiring-algorithms/files/Upturn%20--%20Help%20Wanted%20-%20An%20Exploration%20of%20Hiring%20Algorithms,%20Equity%20and%20Bias.pdf.
Burrell, J. (2016). How the machine ‘thinks’: Understanding opacity in machine learning algorithms. Big Data & Society, 3(1), 2053951715622512.
California Privacy Rights Act of 2020 (2020). https://iapp.org/media/pdf/resource_center/ca_privacy_rights_act_2020_ballot_initiative.pdf.
Chae, Y. (2020). U.S. AI regulation guide: legislative overview and practical considerations. The Journal of Robotics, Artificial Intelligence & Law, 3(1), 17–40.
Chaudhary, M. (2018, May 14). HireVue Acquires MindX to Create a Robust AI-Based Talent Assessment Suite. https://www.hrtechnologist.com/news/recruitment-onboarding/hirevue-acquires-mindx-to-create-a-robust-aibased-talent-assessment-suite/.
Chew, B., Rae, J., Manstof, J., & Degnegaard, S. (2020). Government Trends 2020: What are the most transformational trends in government today? (p. 88) [Deloitte Center for Government Insights]. Deloitte Consulting LLP. https://www2.deloitte.com/content/dam/Deloitte/lu/Documents/public-sector/lu-government-trends-2020.pdf.
Clarke, R. (2009). Privacy impact assessment: Its origins and development. Computer Law & Security Review, 25(2), 123–135. https://doi.org/10.1016/j.clsr.2009.02.002
Council of Europe. (2020). Ad Hoc Committee on Artificial Intelligence (CAHAI) - Feasibility Study. https://rm.coe.int/cahai-2020-23-final-eng-feasibility-study-/1680a0c6da.
Dastin, J. (2018, October 11). Insight—Amazon scraps secret AI recruiting tool that showed bias against women. Reuters. https://in.reuters.com/article/amazon-com-jobs-automation-idINKCN1MK0AH
Esteves, A. M., Factor, G., Vanclay, F., Götzmann, N., & Moreira, S. (2017). Adapting social impact assessment to address a project’s human rights impacts and risks. Environmental Impact Assessment Review, 67, 73–87. https://doi.org/10.1016/j.eiar.2017.07.001
Floridi, L., & Cowls, J. (2019). A unified framework of five principles for AI in society. Harvard Data Science Review. https://doi.org/10.1162/99608f92.8cd550d1
Gilmore, J. (2011). Expression as realization: speakers’ interests in freedom of speech. Law and Philosophy, 30(5), 517–539.
Gotzmann, N. (2017). Human rights impact assessment of business activities: key criteria for establishing a meaningful practice. Business and Human Rights Journal, 2(1), 87–108. https://doi.org/10.1017/bhj.2016.24
Gotzmann, N., Vanclay, F., & Seier, F. (2016). Social and human rights impact assessments: What can they learn from each other? Impact Assessment and Project Appraisal, 34(1), 14–23. https://doi.org/10.1080/14615517.2015.1096036
Gotzmann, N., Bansal, T., Wrzoncki, E., Veiberg, C. B., Tedaldi, J., & Høvsgaard, R. (2020). Human rights impact assessment guidance and toolbox. The Danish Institute for Human Rights. https://www.humanrights.dk/business/tools/human-rights-impact-assessment-guidance-toolbox.
Houser, K. (2019). Can AI Solve the Diversity Problem in the Tech Industry? Mitigating Noise and Bias in Employment Decision-Making. 65.
International Data Corporation. (2020, August 25). Worldwide Spending on Artificial Intelligence Is Expected to Double in Four Years, Reaching $110 Billion in 2024, According to New IDC Spending Guide. IDC: The Premier Global Market Intelligence Company. https://www.idc.com/getdoc.jsp?containerId=prUS46794720.
Information Commissioner’s Office. (2020, July 20). Data protection impact assessments. ICO - Guide to the General Data Protection Regulation (GDPR), Accountability and Governance; ICO. https://ico.org.uk/for-organisations/guide-to-data-protection/guide-to-the-general-data-protection-regulation-gdpr/data-protection-impact-assessments-dpias/.
International Association for Impact Assessment (IAIA). (2012). Fastips No. 1 Impact Assessment. IAIA.
Joh, E. (2017). Feeding the machine: policing, crime data, & algorithms symposium: Big data, national security, and the fourth amendment. William & Mary Bill of Rights Journal, 26(2), 287–302.
Johnson, K. (2021). What algorithm auditing startups need to succeed. VentureBeat. https://venturebeat.com/2021/01/30/what-algorithm-auditing-startups-need-to-succeed/.
Katyal, S. K. (2019). Private Accountability in the Age of Artificial Intelligence. UCLA Law Review, 66(1), 54–141.
Khan, A. N., Ihalage, A. A., Ma, Y., Liu, B., Liu, Y., & Hao, Y. (2021). Deep learning framework for subject-independent emotion detection using wireless signals. PLoS ONE, 16(2), e0242946.
Kim, P. T. (2016). Data-driven discrimination at work. William & Mary Law Review, 58(3), 857–936.
Krishnamurthy, V. (2018, October 10). It’s not enough for AI to be “ethical”; it must also be “rights respecting.” Berkman Klein Center for Internet & Society at Harvard University. https://medium.com/berkman-klein-center/its-not-enough-for-ai-to-be-ethical-it-must-also-be-rights-respecting-b87f7e215b97.
Kroll, J. (2020). Accountability in computer systems. The Oxford Handbook of Ethics of AI. https://doi.org/10.1093/oxfordhb/9780190067397.013.10
Latonero, M. (2018). Governing Artificial Intelligence: Upholding Human Rights & Dignity (p. 38). Data & Society. https://datasociety.net/library/governing-artificial-intelligence/.
Lim, M. (2013). Freedom of expression toolkit: A guide for students. United Nations Educational, Scientific and Cultural Organization. https://unesdoc.unesco.org/ark:/48223/pf0000218618.
LinkedIn Talent Solutions. (2018). LinkedIn Global Recruiting Trends 2018. https://business.linkedin.com/content/dam/me/business/en-us/talent-solutions/resources/pdfs/linkedin-global-recruiting-trends-2018-en-us2.pdf.
Mantelero, A. (2018). AI and Big Data: A blueprint for a human rights, social and ethical impact assessment. Computer Law & Security Review, 34(4), 754–772. https://doi.org/10.1016/j.clsr.2018.05.017
McGregor, L., Murray, D., & Ng, V. (2019). International human rights law as a framework for algorithmic accountability. International & Comparative Law Quarterly, 68(2), 309–343. https://doi.org/10.1017/S0020589319000046
Metcalf, J., Moss, E., Watkins, E. A., Singh, R., & Elish, M. C. (2021). Algorithmic Impact Assessments and Accountability: The Co-construction of Impacts. 19. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3736261.
Mittelstadt, B. (2019). AI ethics—too principled to fail? SSRN Electronic Journal. https://doi.org/10.2139/ssrn.3391293
Nahmias, Y., & Perel, M. (2020). The Oversight of Content Moderation by AI: Impact Assessments and Their Limitations. Harvard Journal on Legislation, 54. https://papers.ssrn.com/abstract=3565025.
Office of the Privacy Commissioner of Canada. (2020, November 12). A Regulatory Framework for AI: Recommendations for PIPEDA Reform. https://www.priv.gc.ca/en/about-the-opc/what-we-do/consultations/completed-consultations/consultation-ai/reg-fw_202011/.
O’Keefe, J., Moss, D. J., & Martinez, T. S. (2020, March 10). Mandatory “Bias Audits” and Special Notices to Job Candidates: New York City Aims to Regulate the Use of Artificial Intelligence in the Workplace. Law and the Workplace. https://www.lawandtheworkplace.com/2020/03/mandatory-bias-audits-and-special-notices-to-job-candidates-new-york-city-aims-to-regulate-the-use-of-artificial-intelligence-in-the-workplace/.
O’Neil Risk Consulting and Algorithmic Auditing (ORCAA). (2020). ORCAA’s Algorithmic Audit of HireVue—Description of Algorithmic Audit: Pre-built Assessments. https://www.hirevue.com/resources/orcaa-report.
Orwat, C. (2020). Risks of Discrimination through the Use of Algorithms (p. 122). Federal Anti-Discrimination Agency (FADA). www.antidiskriminierungsstelle.de.
Raab, C. (2020). Information privacy, impact assessment, and the place of ethics. Computer Law & Security Review, 37, 105404. https://doi.org/10.1016/j.clsr.2020.105404
Raghavan, M., Barocas, S., Kleinberg, J., & Levy, K. (2020). Mitigating bias in algorithmic hiring: Evaluating claims and practices. Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, https://doi.org/10.1145/3351095.3372828
Raji, I. D., Smart, A., White, R. N., Mitchell, M., Gebru, T., Hutchinson, B., Smith-Loud, J., Theron, D., & Barnes, P. (2020). Closing the AI accountability gap: Defining an end-to- end framework for internal algorithmic auditing. Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, https://doi.org/10.1145/3351095.3372873
Raso, F., Hilligoss, H., Krishnamurthy, V., Bavitz, C., & Kim, L. (2018). Artificial Intelligence & Human Rights: Opportunities & Risks (SSRN Scholarly Paper ID 3259344; p. 63).
Harvard University, Berkman Klein Center for Internet & Society. https://doi.org/10.2139/ssrn.3259344
Raub, M. (2018). Bots, bias and big data: artificial intelligence, algorithmic bias and disparate impact liability in hiring practices comment. Arkansas Law Review, 71(2), 529–570.
Research Centre of the Slovenian Academy of Sciences & Arts. (2017). Satori Policy Brief: Supporting ethics assessment in research and innovation (p. 8). European Commission. https://satoriproject.eu/media/SATORI-policy-brief-_2017_Supporting-ethics-assessment-_26-06-2017.pdf.
Robertson, K., Khoo, C., & Song, Y. (2020). To Surveil and Predict: A Human Rights Analysis of Algorithmic Policing in Canada. Citizen Lab and International Human Rights Program, University of Toronto. https://citizenlab.ca/wp-content/uploads/2020/09/To-Surveil-and-Predict.pdf.
Schellmann, H. (2021, February 11). Auditors are testing hiring algorithms for bias, but there’s no easy fix. MIT Technology Review. https://www.technologyreview.com/2021/02/11/1017955/auditors-testing-ai-hiring-algorithms-bias-big-questions-remain/.
Scherer, M. (2017). AI in HR: Civil rights implications of employers’ use of artificial intelligence and big data. Scitech Lawyer, 13(2), 12–15.
Tambe, P., Cappelli, P., & Yakubovich, V. (2019). Artificial intelligence in human resources management: challenges and a path forward. California Management Review, 61(4), 15–42. https://doi.org/10.1177/0008125619867910
The Leadership Conference Education Fund. (2020). Civil Rights Principles for Hiring Assessment Technologies (p. 6). https://civilrights.org/resource/civil-rights-principles-for-hiring-assessment-technologies/.
United Nations. (2012). The Corporate Responsibility To Respect Human Rights - An Interpretive Guide. https://www.ohchr.org/Documents/Publications/HR.PUB.12.2_En.pdf.
United Nations Human Rights Regional Office for Europe. (2018). Make A Difference: An Introduction to Human Rights (p. 205). United Nations. https://europe.ohchr.org/Documents/Publications/MakeADifference_EN.pdf.
Venkatasubramanian, S., & Alfano, M. (2020, January). The philosophical basis of algorithmic recourse. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency (pp. 284–293).
Wright, D., & Friedewald, M. (2013). Integrating privacy and ethical impact assessments. Science and Public Policy, 40(6), 755–766. https://doi.org/10.1093/scipol/sct083
Yeung, K. (2018). A Study of the Implications of Advanced Digital Technologies (Including AI Systems) for the Concept of Responsibility Within a Human Rights Framework (SSRN Scholarly Paper ID 3286027). Social Science Research Network. https://papers.ssrn.com/abstract=3286027.
Zuloaga, L. (2021, January 11). Industry Leadership: New Audit Results and Decision on Visual Analysis. HireVue. https://www.hirevue.com/blog/hiring/industry-leadership-new-audit-results-and-decision-on-visual-analysis.
Funding
None.
Ethics declarations
Disclosure
The authors report no conflict of interest.
Cite this article
Yam, J., Skorburg, J.A. From human resources to human rights: Impact assessments for hiring algorithms. Ethics Inf Technol 23, 611–623 (2021). https://doi.org/10.1007/s10676-021-09599-7