From human resources to human rights: Impact assessments for hiring algorithms

  • Original Paper
  • Published in Ethics and Information Technology

Abstract

Over the years, companies have adopted hiring algorithms because they promise wider job candidate pools, lower recruitment costs and less human bias. Despite these promises, they also bring perils. Using them can inflict unintentional harms on individual human rights. These include the five human rights to work, equality and nondiscrimination, privacy, free expression and free association. Despite the human rights harms of hiring algorithms, the AI ethics literature has predominantly focused on abstract ethical principles. This is problematic for two reasons. First, AI principles have been criticized for being vague and not actionable. Second, the use of vague ethical principles to discuss algorithmic risks does not provide any accountability. This lack of accountability creates an algorithmic accountability gap. Closing this gap is crucial because, without accountability, the use of hiring algorithms can lead to discrimination and unequal access to employment opportunities. This paper makes two contributions to the AI ethics literature. First, it frames the ethical risks of hiring algorithms using international human rights law as a universal standard for determining algorithmic accountability. Second, it evaluates four types of algorithmic impact assessments in terms of how effectively they address the five human rights of job applicants implicated in hiring algorithms. It determines which of the assessments can help companies audit their hiring algorithms and close the algorithmic accountability gap.

Notes

  1. Mittelstadt (2019) documents more than 60 such initiatives.

  2. Beyond recruitment, companies are also using AI-based HR tools to predict and develop effective corporate wellness initiatives for current employees to drive employee engagement and well-being. See Ajunwa, Crawford, & Ford (2016) for an overview.

  3. Textio claims that its “gender tone meter” identifies overly masculine tones in job postings. Its website demo explains how it works using a typical job posting which reads: “We focus on customers, collaboration, and excellence.” The word “excellence” is then flagged as a “fixed mindset phrase which emphasizes raw talent over growth and will hurt inclusion efforts”. Description retrieved from: https://textio.com/products/ on 20 January 2021.
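As a rough illustration of the phrase-flagging idea described in this note, the following sketch matches posting text against a small keyword list. The word list and function are hypothetical illustrations, not Textio's actual model or vocabulary.

```python
# Toy sketch of keyword-based phrase flagging (hypothetical word list and function;
# not Textio's actual model or vocabulary).
FIXED_MINDSET_PHRASES = {"excellence", "rockstar", "ninja"}  # assumed example terms

def flag_fixed_mindset(posting: str) -> list[str]:
    """Return words in a job posting that appear on the assumed fixed-mindset list."""
    words = [w.strip(".,!?;:") for w in posting.lower().split()]
    return [w for w in words if w in FIXED_MINDSET_PHRASES]

print(flag_fixed_mindset("We focus on customers, collaboration, and excellence."))
# -> ['excellence']
```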

  4. Arya claims to harness data from social media sites and “50 + other relevant professional sources to create meaningful insights and a deeper intelligence of each candidate” (Raub, 2018).

  5. Mya, a “conversational AI”, promises to “make hiring more human”. It asks applicants to provide additional qualification information, which it independently evaluates. It then automatically schedules qualified applicants for a phone screening interview with a human recruiter. Descriptions retrieved from: https://www.mya.com/meetmya/ on 20 January 2021.

  6. The Cappfinity platform offers the “Koru7 Impact Skills” assessment, which tests for “the seven soft skills that every employer is looking for in their best-fit hires: Grit, Rigour, Impact, Teamwork, Curiosity, Ownership and Polish.” After completing a 20-minute assessment, an applicant is given a “fit score” and a “Koru7 impact skills profile” which predict their future job performance. If the scores on “fit” or “generosity”, or the “Koru7 impact skills profile”, meet pre-determined standards, the applicant is invited to a formal interview. Descriptions retrieved from: https://www.cappfinity.com/koru/ on 20 January 2021.

  7. Pymetrics claims to collect “objective behavioral data that measures a job seeker's true potential.” It provides 12 interactive “gamified assessments” which purport to measure attention, effort, fairness, decision making, emotion, focus, generosity, learning and risk tolerance. It then develops tailored interview questions based on these assessments. Descriptions retrieved from: https://www.pymetrics.ai/science on 20 January 2021.

  8. In January 2021, as a result of a public backlash, HireVue announced that it would cease using facial analysis. See, for example, https://www.wired.com/story/job-screening-service-halts-facial-analysis-applicants/

  9. Fama promises to “identify problematic behavior before it becomes an issue” by providing “background checks for the twenty-first century”. It flags instances of misogyny, bigotry, racism, violence and criminal behavior based on an applicant’s social media activities. Descriptions retrieved from https://fama.io/product/ on 14 January 2021.

  10. Beqom uses machine learning to “optimize compensation models and incentives plans,” while also claiming to keep compensation fair and close pay gaps. Descriptions retrieved from https://www.beqom.com/artificial-intelligence-driven-compensation on 14 January 2021.

  11. As we discuss in the following section, the speed and scale at which hiring algorithms perform their tasks may give rise to potential harms in some contexts.

  12. Following Burrell (2016), we can distinguish between three forms of opacity in these kinds of cases: (1) Opacity due to technical illiteracy (e.g., hiring managers insufficiently versed in statistics and thus unable to properly interpret outputs of a predictive hiring model); (2) Opacity due to competitive advantage (e.g., companies treating their data and algorithms in proprietary personality assessments as trade secrets thus limiting a third party from assessing the internal, external and construct validity of such assessments); and (3) Opacity due to fundamental representational capacities of some machine learning models (e.g., deep neural networks used in facial analysis extracting high-level representations incomprehensible to human semantic understanding thus undermining the ability of humans to explain why or how a given classification was reached).

  13. Even if hiring algorithms become transparent and intelligible, companies may nevertheless be unwilling to disclose such information to protect their confidential intellectual property rights over such hiring algorithms (Katyal, 2019).

  14. The infamous Amazon talent recruitment case is a classic example of AI bias. In 2015, Amazon created hiring algorithms to scout for software engineers, using training data based on the resumes of its top software engineers over the past decade (Houser, 2019). Because 80 percent of Amazon’s top software engineers were male, the algorithms were primarily trained with their resumes. Even though the algorithms were explicitly designed not to use gender as a factor, they recognized patterns in the resumes which had “verbs more commonly found on male engineers’ resumes, such as ‘executed’ and ‘captured’” (Dastin, 2018). Similarly, Amazon’s hiring algorithms may have rejected most female applicants because their resumes contained the term ‘women’s’, for instance from joining a women’s tennis club or studying at an all-women’s college (Tambe et al., 2019). When Amazon discovered these discriminatory results, it immediately stopped using the algorithms.
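The following sketch illustrates this proxy mechanism with fabricated toy data; it is not Amazon's system, only a minimal example of how a text classifier trained on skewed labels can assign weight to gendered terms even when gender is never an explicit feature.

```python
# Illustrative sketch of the proxy-bias mechanism described above (fabricated toy
# data; not Amazon's actual system or data).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical resume snippets; labels reflect a historically male-skewed hiring
# record rather than true ability.
resumes = [
    "executed migration of backend services and captured key performance wins",
    "led the women's engineering society and built data pipelines",
    "captured market requirements and executed deployment automation",
    "women's chess club president who developed REST APIs",
]
hired = [1, 0, 1, 0]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

# Gender is never an input feature, yet proxy terms ("executed", "captured",
# "women") receive the largest weights because they track the skewed labels.
weights = sorted(zip(vectorizer.get_feature_names_out(), model.coef_[0]),
                 key=lambda pair: pair[1])
print(weights[:3])   # most negative weights (terms the model learned to penalize)
print(weights[-3:])  # most positive weights (terms the model learned to reward)
```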

  15. To illustrate, Hootsuite, a leading social media management platform, claims to “empower employees to share posts across their own social networks” under the heading of “employee advocacy”. However, the platform specifies that this sharing should be done in a way that “reduces risk of non-compliant or off-brand posts by providing only approved messages for staff to share.” Retrieved from https://www.hootsuite.com/solutions/employee-advocacy on 15 February 2021.

  16. This is an actual case, reported by Quartz, where the name “Jared” and “playing lacrosse” were identified by a hiring algorithm as top predictors of job success. See: https://qz.com/1427621/companies-are-on-the-hook-if-their-hiring-algorithms-are-biased/
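An algorithmic impact assessment could include a simple audit step to catch such spurious predictors before deployment. The sketch below is hypothetical (invented feature names and toy data), not the method of the vendor reported by Quartz.

```python
# Hypothetical audit check (invented feature names and data; not the vendor's
# actual method): flag inputs that strongly track the model's "success" output
# despite having no plausible job relevance.
import pandas as pd

applicants = pd.DataFrame({
    "first_name_is_jared": [1, 0, 1, 0, 0, 1],
    "played_lacrosse":     [1, 0, 1, 1, 0, 1],
    "years_experience":    [3, 5, 2, 4, 6, 3],
    "predicted_success":   [1, 0, 1, 1, 0, 1],
})

# Features a human auditor has judged irrelevant to the job in question.
irrelevant = ["first_name_is_jared", "played_lacrosse"]

corr = applicants.corr()["predicted_success"].drop("predicted_success")
flagged = corr[corr.index.isin(irrelevant) & (corr.abs() > 0.5)]
print(flagged)  # strong correlations here signal spurious or proxy predictors
```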

  17. Thanks to Don Dedrick for helpful feedback on an early draft of this manuscript.


Funding

None.

Author information

Corresponding author

Correspondence to Joshua August Skorburg.

Ethics declarations

Disclosure

The authors report no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

About this article

Cite this article

Yam, J., Skorburg, J.A. From human resources to human rights: Impact assessments for hiring algorithms. Ethics Inf Technol 23, 611–623 (2021). https://doi.org/10.1007/s10676-021-09599-7
