
Table 1 Ethical concerns related to algorithmic use based on the ‘map’ created by Mittelstadt et al. (2016)

From: From What to How: An Initial Review of Publicly Available AI Ethics Tools, Methods and Research to Translate Principles into Practices

Ethical concern: Explanation
Inconclusive evidence: Algorithmic conclusions are probabilistic and therefore not infallible, which can lead to unjustified actions. For example, an algorithm used to assess creditworthiness could be accurate 99% of the time, but this would still mean that one in a hundred applicants is wrongly denied credit (a simple numerical illustration follows this table)
Inscrutable evidence: A lack of interpretability and transparency can lead to algorithmic systems that are hard to control, monitor, and correct. This is the commonly cited ‘black-box’ issue
Misguided evidence: Conclusions can only be as reliable (and as neutral) as the data they are based on, which can lead to bias. For example, Dressel and Farid (2018) found that the COMPAS recidivism algorithm, commonly used in pretrial, parole, and sentencing decisions in the United States, is no more accurate or fair than predictions made by people with little or no criminal justice expertise
Unfair outcomes: An action could be found to be discriminatory if it has a disproportionate impact on one group of people. For instance, Selbst (2017) articulates how the adoption of predictive policing tools is leading to more people of colour being arrested, jailed, or physically harmed by police
Transformative effects: Algorithmic activities, such as profiling, can create challenges for autonomy and informational privacy. For example, Polykalas and Prezerakos (2019) examined the level of access to personal data required by more than 1,000 apps listed in the ‘most popular’ free and paid-for categories on the Google Play Store. They found that free apps requested significantly more data than paid-for apps, suggesting that the business model of these ‘free’ apps is the exploitation of personal data
Traceability: It is hard to assign responsibility for algorithmic harms, which can lead to issues of moral responsibility. For example, it may be unclear who (or indeed what) is responsible for fatalities involving autonomous cars. An in-depth ethical analysis of this specific issue is provided by Hevelke and Nida-Rümelin (2015)
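
To make the ‘inconclusive evidence’ row concrete, the short Python sketch below shows how an assumed 99% accuracy still translates into a large absolute number of wrongful decisions once a model is applied at scale. The accuracy figure, applicant volumes, and function name are hypothetical illustrations, not figures reported in the cited work, and the sketch treats every error as a wrongful denial, which is a simplifying assumption.

```python
def expected_wrongful_decisions(accuracy: float, n_applicants: int) -> float:
    """Expected number of incorrect decisions, assuming every error
    harms an applicant (a simplifying, hypothetical assumption)."""
    return (1.0 - accuracy) * n_applicants

# Hypothetical figures: a 99%-accurate credit-scoring model applied to
# growing applicant pools still produces many wrong decisions in absolute terms.
if __name__ == "__main__":
    for n in (100, 10_000, 500_000):
        errors = expected_wrongful_decisions(0.99, n)
        print(f"{n:>9,} applicants -> {errors:,.0f} expected wrongful decisions")
```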