1 Introduction

The term ‘recognition’ is used in a different sense in philosophy than it is in the field of computer science. In computer science, ‘facial recognition’ refers either to the identification of individuals on the basis of images of their face (i.e., the recognition of objective identity) or to categorization based on people’s facial features (i.e., the recognition of traits such as age, gender, race, or emotions, which can, but do not have to, constitute one’s subjective sense of identity). Within philosophy—in particular in the work of Hegel, critical theorists and feminists—‘recognition’ refers to the regard for an individual’s identity or aspects thereof. Hence, in a philosophical context, recognition is not the same as identification or categorization, but has a distinct meaning (as also pointed out by [15]). According to recognition scholars like Taylor [21] and Honneth [13], receiving recognition is of indispensable importance to people’s self-development and identity formation and, moreover, they argue that a just society is one in which everyone receives due recognition [22]. Hence, recognition has normative relevance.

In this paper, I discuss three ways in which facial recognition technology can fail to recognize individuals in the philosophical, normative sense of the word. In two of these three cases, the misrecognition that takes place is the result of a failure of the technology, namely misidentification or miscategorization. The third form of misrecognition has to do with the limitations of facial recognition and the information it can derive from the image of a face. The distinction between objective and subjective identity is important here. ‘Objective identity’ refers to our formal identity, such as our official name, or a citizen or customer number by which we are known to institutions. Objective identity can also include the demographics or categories we objectively belong to (e.g., White, male, elderly, student). These are traits that a person can, but does not have to, identify with. ‘Subjective identity’, then, refers to the characteristics that we do identify with: the traits that constitute our sense of self. Our subjective identity is our own understanding of who we are and of how we would present ourselves to others.

Following the work of Taylor [21] and Honneth [13] on the topic of recognition, I argue that facial recognition’s potential to misrecognize individuals should be considered alongside other ethical concerns raised by facial recognition and other emerging technologies. Analyzing the ways in which artificial intelligence (AI) applications like facial recognition can misrecognize people in a normative sense is a valuable new approach in the ethics of AI, one that can shed light on ethical issues that are not yet fully addressed in the debate on facial recognition and other emerging AI applications (e.g., [4, 6, 20]).

The structure of this paper is as follows. In Sect. 2, the next section, I explain what facial recognition technology is and offer some examples of the multitude of ways in which it is used. In Sect. 3, I introduce the philosophical concept of recognition, primarily on the basis of Charles Taylor’s The Politics of Recognition (1992) and Axel Honneth’s The Struggle for Recognition (1996). In Sect. 4, I analyze three ways in which facial recognition systems misrecognize individuals in a normative sense. In Sect. 5, I discuss what it means to be recognized by a facial recognition system rather than a fellow human being, and I end the paper with a brief conclusion in Sect. 6.

2 Facial recognition technology

Facial recognition systems are biometric identification and categorization AI tools that, like similar systems, aim at connecting “identity to the body” ([10], 14). The facial recognition systems aimed at biometric identification recognize individuals’ objective identity on the basis of their facial features. This type of recognition requires the availability of a database to which images or videos of faces can be compared. For example, law enforcement agencies can use facial recognition to identify a suspect in a video only if they have access to a database that contains the suspect’s face, such as a database of previous criminal offenders. Biometric categorization systems perform facial analysis; they infer people’s demographic information (e.g., age, gender, ethnicity) or inner states (e.g., emotions, intentions) from their facial features and expressions. These systems infer someone’s objective identity, to the extent that they categorize people into demographic groups to which they objectively belong. In contrast to biometric identification, categorization does not require the person in question to be known to the system. On the basis of large amounts of data from other smiling individuals, a facial recognition system learns to recognize an unknown individual as smiling.
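To make the contrast concrete, consider the following minimal sketch of the two tasks. It is purely illustrative and not a description of any actual system: the function names, the use of cosine similarity over face ‘embeddings’ (numerical descriptions of faces), and the threshold value are all assumptions of the sketch.

```python
import numpy as np

def identify(probe, database, threshold=0.6):
    """1:N identification: compare a probe face embedding against a
    reference database mapping known identities to stored embeddings."""
    best_id, best_score = None, -1.0
    for identity, ref in database.items():
        # Cosine similarity between the probe and a reference embedding
        score = np.dot(probe, ref) / (np.linalg.norm(probe) * np.linalg.norm(ref))
        if score > best_score:
            best_id, best_score = identity, score
    # Below the threshold, the person is treated as unknown
    return best_id if best_score >= threshold else None

def categorize(probe, classifier):
    """Categorization: no reference database is needed; a classifier
    trained on faces of other people maps the probe embedding to a
    category label (e.g., 'smiling')."""
    return classifier(probe)
```

The asymmetry is the point made above: `identify` only works if the person’s face is already in the database, whereas `categorize` works on complete strangers.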

To complicate the matter, there are different possible modes of facial recognition: near or remote recognition, and real-time or post recognition. In the draft AI Act brought out by the European Commission in 2021, remote biometric identification is defined as “an AI system intended for the identification of natural persons at a distance through the comparison of a person’s biometric data with the biometric data contained in a reference database” (European Commission [8], p 19). Remote facial recognition can thus entail identifying someone in a shopping center by analyzing the footage of the shopping center’s CCTV cameras. Near recognition is not defined in the AI Act, but one can assume it includes something like smart glasses or a phone application that enables one person to identify another person in the same physical space. Real-time and post recognition, furthermore, have to do with the time at which a person is recognized by a biometric system. Real-time recognition happens live, while post recognition happens in hindsight, for example by applying facial recognition techniques to stored video footage. These distinctions are relevant, because different modes of facial recognition are treated differently by policy-makers. The EU AI Act proposes to ban remote, real-time biometric identification in public places. This means that facial recognition systems that are used nearby, after the fact, or in private spaces would still be allowed. Moreover, the ban is restricted to the use of facial recognition for identification, while facial recognition is used for categorization as well.
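Read this way, the scope of the proposed ban can be summarized in a few lines of logic. The sketch below is a simplification of my reading of the draft Act (which contains exceptions not modeled here), and all names in it are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Deployment:
    remote: bool          # at a distance, rather than 'near' (same physical space)
    real_time: bool       # live analysis, rather than 'post' (stored footage)
    public_space: bool    # used in a publicly accessible place
    identification: bool  # identification, rather than categorization

def falls_under_proposed_ban(d: Deployment) -> bool:
    # The proposed ban targets only the conjunction of all four
    # conditions; relaxing any one of them (near, post, private
    # space, or categorization) escapes it.
    return d.remote and d.real_time and d.public_space and d.identification
```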

Because it facilitates both identification and categorization, facial recognition has many possible applications. To give a few examples: facial recognition can serve as an efficient identity management tool for law enforcement and businesses; it enables so-called ‘smile-to-pay’ services, where identification on the basis of one’s face suffices to make a transaction; and it can facilitate the personalization of services like advertisements, customer service or restaurant menus, or even personalize products such as games [5]. Hence, facial recognition comes with many benefits—not only to businesses and governments, but also to individual users. It offers individuals considerable convenience, for instance by personalizing products, enabling quicker transactions, and securing apps and devices without the need for a key, card, or password.

However, facial recognition applications also give rise to a number of ethical problems, including (but not limited to) violations of the right to privacy and of the principle of non-discrimination. Facial recognition can violate one’s privacy, first of all, because it is expected to become omnipresent and able to infer a wide variety of information about a person. Secondly, the technology has raised concerns regarding discrimination, as many facial recognition systems have been found to contain algorithmic biases against certain demographics. Facial recognition’s impacts on privacy and discrimination have received substantial attention, and they are important issues indeed, but they do not cover all the possible problems that facial recognition technology can give rise to. Facial recognition applications, including discriminatory or invasive ones, can have harmful effects on a person’s psychological development. More precisely, when a facial recognition system fails to adequately recognize a person’s identity or traits, and especially when it does so structurally, it can hamper the person’s development of self-respect and self-esteem. In what follows, I therefore argue that (mis)recognition too is an important ethical issue in the context of facial recognition technology. To do so, I first briefly elaborate on the concept of recognition and its ethical relevance.

3 The politics of recognition

The topic of recognition, as discussed within contemporary social and political philosophy, goes back to Georg Wilhelm Friedrich Hegel (1770–1831). In his Elements of the Philosophy of Right, Hegel distinguished three types of relations of recognition, related to three societal spheres, namely the family, the law, and the social [11]. Hegel argues that each relation of recognition is needed to develop as an autonomous individual. His discussion of recognition has inspired philosophers like Axel Honneth and Charles Taylor, who played an important part in reviving the debate about recognition and justice in the 1990s, a debate that has since been continued by feminist scholars such as Nancy Fraser and Judith Butler, among others.

Both Taylor and Honneth adopted Hegel’s distinction between three spheres of recognition. Taylor [21], to start with, argues that recognition can take place in the intimate sphere or in the public sphere. Within the public sphere, he then distinguishes between the politics of universalism, “emphasizing equal dignity of all citizens”, and the politics of difference, which refers to the recognition of “the unique identity of this individual or group, their distinctness from everyone else” ([21], 37–38). The intimate sphere, which Taylor leaves out of the discussion in his 1992 essay The Politics of Recognition, resembles Hegel’s sphere of the family. The politics of universalism in the public sphere is akin to Hegel’s sphere of the law. As Taylor writes: “the content of this politics has been the equalization of rights” ([21], p 37). In the twentieth century, the politics of universalism found expression in the civil rights movement and feminist movements. The politics of difference, related to Hegel’s idea of the social, followed later. While early feminists and the civil rights movement were preoccupied with guaranteeing that everyone is recognized and treated as being equal by the law (i.e., that the law is blind to difference), more recent political movements fight for the recognition of differences between individuals or groups. The politics of difference stresses the importance of differential treatment, to establish and secure social freedom. An example of differential treatment that functions as recognition in the social sphere is giving exclusive land rights to indigenous people. Other examples would be introducing hiring quotas to promote diversity in certain professions, or making ‘non-binary’ a gender option on official documents for those who identify as neither male nor female. Taylor’s essay is valuable in that it explains how a theory of recognition can help to understand the significance of identity politics.

According to Honneth [13], relations of recognition can be based on love, rights or solidarity. Recognition on the basis of love, first of all, implies acknowledging and valuing someone’s needs and feelings. Hence, this mode of recognition refers not to romantic love, but to strong emotional bonds or relations of care. Honneth explains recognition on the basis of love primarily through the example of the bond between a parent and a child, drawing on psychoanalysis. He argues that the recognition of one’s needs and feelings is necessary to develop self-confidence. Recognition on the basis of rights, secondly, refers to the acknowledgment of another as an autonomous individual and bearer of rights and duties; it resembles Hegel’s sphere of the law and Taylor’s politics of universalism. Recognition of rights is also referred to as ‘respect’, and receiving this type of recognition enables a person to develop self-respect and become an autonomous individual, capable of making independent choices and taking responsibility for their actions. Recognition on the basis of solidarity, finally, resembles Hegel’s sphere of the social and Taylor’s politics of difference. Being recognized on the basis of solidarity entails obtaining acknowledgment for one’s achievements or societal contributions—it is the recognition of specific aspects of a person’s identity. Recognition on the basis of solidarity, most often referred to as ‘esteem’, is necessary for individuals to develop self-esteem.

Following Hegel, Honneth argues that people need each of these three forms of recognition to develop a practical relation to oneself: a sense of self-respect, self-esteem, and self-confidence that in turn allows a person to flourish, achieve their goals, and build strong and meaningful social relations. In Honneth’s own words: recognition “permit[s] the addressee to identify with his or her own qualities and thus to achieve a greater degree of autonomy” ([14], p 330). Taylor too claims that recognition shapes a person’s identity. Adequate recognition helps a person to develop into their authentic, autonomous self. Therefore, Taylor holds, recognition is a vital human need. According to both philosophers, due relations of recognition are a necessary condition for a just society, and the lack of adequate recognition, or misrecognition, is problematic because it can harm a person’s self-development. As Taylor explains: “(…) a person or group of people can suffer real damage, real distortion, if the people or society around them mirror back to them a confining or demeaning or contemptible picture of themselves” ([21], p 25). One can assume that this damage is most severe when the misrecognition is not a single event, but something that is experienced structurally. Since recognition is considered to be constitutive of a person’s ability to develop as an authentic and autonomous being and a condition for a just society, misrecognition should be seen as a threat to autonomy and a violation of justice. Alternatively, misrecognition could be perceived as a threat to well-being, since it hampers a person’s flourishing. Hence, misrecognition is an ethical concern, because the psychological implications of misrecognition touch upon fundamental moral values and principles.

Finally, those who experience misrecognition are said to struggle for recognition. The struggle for recognition is a struggle for the affirmation of one’s identity. It is important to point out, though, that not every misrepresentation of a person’s identity is necessarily harmful or counts as an instance of misrecognition. For instance, if my name is misspelled in my passport, I am not misrecognized with respect to rights, solidarity or love.

4 Misrecognition by facial recognition technology

The literature on (mis)recognition can help us to understand some potential ethical implications of facial recognition technology that are otherwise likely to be overlooked. In this section, I analyze three possible types of misrecognition by facial recognition systems. The first two (discussed in Sects. 4.1 and 4.2) resemble instances of technological error, namely misidentification and miscategorization. The third type of misrecognition (Sect. 4.3) has to do with the fact that the technology is limited when it comes to recognizing certain, more subjective aspects of our identity. Even when facial recognition applications function as they should or promise to, they can fail to categorize people in accordance with their subjective sense of identity.

4.1 Misrecognition as misidentification

A first, much-discussed problem of facial recognition systems used for biometric identification is the fact that many of these systems consistently misidentify certain demographics. More precisely, Black, female, and young (18–30) individuals have been found to be identified with significantly lower accuracy than other demographics. Klare et al. [16], for instance, reported that “commercial and the nontrainable algorithms consistently have lower matching accuracies on the same cohorts (…) within their demographics”. To put it differently, facial recognition systems are often biased: they perform better for certain societal groups. The main cause of these biases is a lack of diversity in training data sets. Biases in facial recognition can result in false positives and false negatives, which in turn lead to demeaning experiences and discriminatory treatment of those whom the particular system fails to identify accurately.
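What it means for a system to ‘perform better for certain societal groups’ can be made precise by computing error rates per demographic cohort. The following sketch is illustrative only; the variable names, the boolean encoding of ground truth, and the threshold are assumptions, not part of any benchmark cited above:

```python
import numpy as np

def error_rates_by_group(scores, same_person, cohorts, threshold=0.6):
    """Per-cohort false positive and false negative rates of a matcher.
    scores: similarity score of each face comparison (higher = more alike)
    same_person: whether each compared pair truly shows the same person
    cohorts: demographic cohort label of each comparison"""
    scores = np.asarray(scores)
    same_person = np.asarray(same_person, dtype=bool)
    cohorts = np.asarray(cohorts)
    rates = {}
    for g in np.unique(cohorts):
        mask = cohorts == g
        matched = scores[mask] >= threshold
        truth = same_person[mask]
        # False positive: different people judged to be the same person
        fpr = matched[~truth].mean() if (~truth).any() else float("nan")
        # False negative: the same person not matched (e.g., locked out)
        fnr = (~matched[truth]).mean() if truth.any() else float("nan")
        rates[g] = {"false_positives": fpr, "false_negatives": fnr}
    return rates
```

A biased system in the sense discussed here is one for which these rates differ markedly between cohorts at the same threshold.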

Take the following example of a false positive: a facial recognition system identifies person X as suspect Y in a video, while person X had nothing to do with the crime. Such cases have occurred on several occasions in the US, where (biased) facial recognition systems were used by law enforcement [12]. When facial recognition is used by law enforcement to identify suspects in images or security footage, but the system is not as accurate when identifying darker-skinned people as it is when identifying their fellow citizens with a lighter skin tone, darker-skinned citizens are at a higher risk of being falsely identified as a suspect. Being falsely identified and treated as a suspect because of the color of your skin is a form of discrimination.

A false negative occurs, for instance, when a facial recognition system that is meant to unlock devices such as smartphones misidentifies the owner of the device and therefore denies them access. This form of misidentification not only causes the device’s owner unnecessary inconvenience; when certain demographics consistently struggle to access their devices because the facial recognition system that grants access is biased, it also becomes demeaning.

These examples show that misidentification is an ethical issue, because it can lead to discrimination or violate human dignity. However, such misidentification has another type of ethical implication that is worth paying attention to. Biased systems fail to recognize darker-skinned, female, and young people as equal members of the society in which the systems are developed or used. When facial recognition systems are introduced and used despite the fact that they are unable to identify all demographics with the same level of accuracy, whether in law enforcement or to unlock a personal device, it seems as if the misidentified demographics are less important than the groups that are accurately recognized. Darker-skinned, female, or young people are not treated as being of equal worth when only they have to suffer discriminatory or demeaning treatment as a result of the society-wide adoption of flawed facial recognition technology. By not recognizing specific demographics as being equally important, biased facial recognition systems enact something akin to misrecognition on the basis of rights [13] and conflict with the politics of universalism [21]. Facial recognition should not function adequately only for some, when all are confronted with the technology, willingly and unwillingly, on a regular basis. This misrecognition on the basis of rights is ethically problematic not just as a violation of moral principles regarding equal treatment, but also because it can have long-lasting damaging effects on a person’s self-development, in particular on their sense of self-respect (following [13]).

4.2 Misrecognition as miscategorization

Categorization systems analyze faces to determine which demographic a person belongs to or what their inner state or behavior is at a given moment. Most common are systems that categorize people by gender, race, and age, or that analyze people’s emotions or sentiments. However, there have also been attempts to infer people’s religious beliefs, sexual orientation, or other characteristics from facial features and expressions [3].

Categorization systems can fail to accurately recognize people’s traits due to technical error or because there are limitations to what a face can really reveal about a person. Technical error, first of all, occurs for instance when systems contain algorithmic biases. Just like identification systems, categorization systems have been found to perform with significantly lower accuracy when analyzing videos or images of darker-skinned and female individuals [7]. As a result, darker-skinned people and women more often have to deal with facial analysis systems that categorize them wrongly, or fail to categorize them at all. A notorious example of failed categorization is the series of cases in which a categorization system labeled images of darker-skinned people as ‘gorillas’, ‘apes’ or ‘primates’. Another well-known example of miscategorization is that of facial recognition systems that repeatedly identify Asian people as ‘blinking’. These are vivid examples of how miscategorization is not just an inconvenient or innocent error, but truly demeaning and potentially damaging for a person’s self-esteem.

Secondly, miscategorization can occur when facial analysis systems are meant to derive traits that might not be derivable from the image of a face in the first place. For example, emotion recognition systems (a sub-category of facial analysis) are grounded in the assumption that all individuals, regardless of their cultural background or upbringing, show basic emotions with the same facial expressions. But some psychologists have expressed strong doubts that emotions are natural kinds that are expressed in the same manner by all individuals [1, 2]. In other words, if emotions are not expressed in the same way by all people, emotion recognition systems are not able to do what they claim to do.

Furthermore, there have been various attempts to develop facial analysis systems that can tell someone’s sexual orientation or the likelihood that they will commit a crime in the future. Just like emotion recognition, these systems rest on unscientific assumptions. The attempt to derive matters like sexuality or criminality from a person’s facial features is grounded in the idea that a person’s whole identity, including their personality, can be read from their face. This assumption resembles physiognomy, a pseudo-science practiced in the West until deep into the nineteenth century [10]. Physiognomy’s aim was to determine a person’s personality on the basis of their facial appearance—a practice which has led to the discriminatory treatment of certain societal groups. The history of physiognomy should have taught us to be careful when making assumptions about people’s identity or characteristics on the basis of their appearance, but in today’s development of facial analysis applications we see a continuing belief that faces are “carriers of signs that reveal the essential qualities of their bearers” ([10], p 21).

Just as in the case of misidentification, miscategorization can give rise to a struggle for recognition. Miscategorization implies that people have elements of their identity misunderstood or ignored, or worse, that they are treated wrongly because of certain elements of their identity. In other words, when miscategorization occurs, specific elements of a person’s identity are misrecognized. Therefore, miscategorization relates to the politics of difference [21]. To the extent that false categorization concerns specific aspects of identity, rather than identity as a whole, miscategorization can also be understood as misrecognition on the basis of solidarity in the Honnethian sense. It should be noted, however, that Honneth understood solidarity mainly as the recognition of people’s societal contributions; he considered the recognition of an individual’s unique qualities to be a form of esteem. He argued that without receiving esteem from others, individuals cannot develop the level of self-esteem they need to be autonomous and flourish as authentic individuals. Hence, misrecognition through miscategorization is an ethical issue because of its negative implications for people’s development of self-esteem.

As is the case for misidentification, the negative implications of miscategorization are likely to be most severe when miscategorization is experienced systematically, but they might nevertheless result from a single instance of misrecognition. Consider the aforementioned, infamous example of a facial recognition system that labels an image of an African-American user as ‘gorilla’. Such miscategorization does not need to happen repeatedly for it to harm the user’s self-esteem. This single instance of misrecognition can impact the user’s future interactions with technology (for example when they upload a picture of themselves to a social media platform that uses facial recognition) as well as their idea of how they are perceived by the outside world.

4.3 Misrecognition due to the inability to infer subjective identity

Misrecognition through misidentification or miscategorization, which I discussed in the two subsections above, occurs when a facial recognition system does not function as intended. However, even when facial recognition systems are well-functioning, unbiased and scientifically sound, they can still misrecognize those subjected to analysis—as I explain in this third subsection.

Facial recognition systems reduce people’s identity to certain categories and to the personal information that their face can reveal. But limiting identity to the face and to predetermined categories implies leaving out certain aspects of people’s identities and excluding those who do not fit the regular boxes. For instance, binary gender recognition systems—that is, systems that categorize individuals as either ‘male’ or ‘female’—fail to appropriately recognize anyone identifying as non-binary. Similarly, racial recognition systems will have difficulties categorizing mixed-race individuals in line with the race or ethnicity they identify most with. In these cases, important aspects of people’s subjective identity are not recognized by the respective categorization system, even when the system functions in accordance with the norms by which it was programmed.
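The structural nature of this problem can be seen in a schematic sketch of a binary gender classifier. This is a deliberately simplified illustration, not a description of any particular product: whatever the model’s internal scores, the output space contains exactly two predetermined boxes.

```python
import numpy as np

GENDER_LABELS = ("male", "female")  # the only boxes the system knows

def classify_gender(logits):
    """Return the highest-scoring of two predetermined labels. However
    accurate the underlying model, a non-binary person cannot be
    categorized in line with their subjective identity: that option is
    excluded from the output space by design."""
    return GENDER_LABELS[int(np.argmax(logits))]
```

The misrecognition here is not a bug to be fixed with more training data; it follows from the norms by which the system was programmed.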

Furthermore, when a categorization system applies categories or ‘boxes’ that exclude the subjective identity of certain groups in society, this can mean that members of those groups constantly have to face the stereotypes they try to avoid. In such a case, the system amplifies a pre-existing struggle for recognition. Take the aforementioned example of a gender recognition system that uses binary gender categories. Such a system can be used, among other things, to personalize advertisements in stores or online. Now imagine: a person who identifies as neither male nor female walks into a clothing store, and the store’s categorization system identifies them as female. The store’s personalization system therefore shows the customer that dresses are on discount this week. As a result, the person is confronted with a stereotype they actively try to avoid. Admittedly, this might not be the only problematic stereotype in this scenario, as most clothing stores are still divided into a section for men and one for women. But the point to be made here is that categorization systems can contribute to (harmful) stereotypes. Moreover, by automatically inferring objective identities and characteristics, facial recognition systems deprive people of the opportunity to communicate their subjective sense of identity to others themselves. If the store in this example did not depend on facial recognition to help its customers or to share information about discounts, the non-binary customer would have had the chance to say that they do not identify as male or female, and thus to look for clothes that match or express their particular identity.

So, in short, facial recognition is limited when it comes to recognizing people’s subjective identity, as the technology can only recognize objective forms of identity and those aspects that can be read off a person’s face. Facial recognition is often promoted and used as a tool for personalization, but given its limitations in recognizing a person’s true individuality or individual uniqueness, we should question the extent to which facial recognition can really personalize services.

Honneth’s work on the struggle for recognition and Taylor’s division between two forms of identity politics help us to understand why facial recognition’s reduction of people’s identity to certain (objective) traits or predetermined categories can be an ethical problem. Because facial recognition is unable to recognize a person’s subjective identity, it misrecognizes people’s uniqueness, i.e., that which differentiates them from other people. Hence, by failing to recognize people’s unique individuality, facial recognition—categorization systems in particular—reinforces people’s struggle for social esteem [13] and a politics of difference [21]. By contributing to the struggle for recognition of certain societal groups or individuals, facial recognition threatens their identity formation. More precisely, if we follow Honneth’s theory, the misrecognition of an individual’s uniqueness potentially harms their development of self-esteem.

5 Can we get (mis)recognized by technology?

Thus far I have argued that facial recognition technology can misrecognize individuals on a normative level, in (at least) three ways. I argued that this is an ethical concern, as it can have a negative effect on people’s development of self-respect and self-esteem. In doing so, I followed Honneth [13] and Taylor [21]. However, recognition—in the way it is discussed by Honneth, Taylor, and other social philosophers—is an interpersonal phenomenon. The question that therefore remains to be addressed here is: can facial recognition technology (mis)recognize us in the same way as fellow human beings can?

Laitinen [18] describes a distinction between two ways in which recognition is delineated in the literature on the topic. The first account is what Laitinen calls the ‘mutuality insight’. Under this view, recognition presupposes that the person who is being recognized (B) is aware of the recognition by A and also recognizes A as being capable of recognizing. The second account is called ‘adequate regard’ and implies merely an attitude of A towards B, not the other way around. We could say that, under this second account, B could be affected by A’s (lack of) recognition without being aware of that (mis)recognition, whereas under the mutuality insight, B needs to be conscious of A’s recognition.

Left out of this discussion is the question of what sort of agent or entity we can receive recognition from. Rather than getting stuck in a debate about whether or not AI has the agency or intentionality that would be required to say it can recognize persons in the same way humans can, I propose a more pragmatic approach to the matter. If we follow the adequate regard account, first of all, it does not matter whether the person considers the technology to be capable of recognizing them. All that matters under this understanding of recognition is the effect the system has on the person’s self-development. If what appears to be misrecognition on the part of the facial recognition system indeed has the same sort of constitutive or harmful effects on a person’s sense of self-worth, then it makes sense to say that facial recognition technology (mis)recognizes the people whose faces it analyzes. To determine whether facial recognition can misrecognize people under this second account of recognition, it would have to be investigated empirically whether inadequate recognition by facial recognition or other technologies has the same sort of harmful impact on a person’s self-development as misrecognition by human beings does.

Secondly, following the mutuality insight account of recognition, we could say that there is a relevant relation of recognition between a facial recognition system and a person if the person in question recognizes the facial recognition system as an entity capable of recognizing them on the level of rights, solidarity or even love (which I did not discuss in relation to facial recognition). Moreover, if an individual perceives facial recognition, or any technology for that matter, as an entity capable of giving them respect or esteem, the failure of this technology to do so can also harm their self-development. Whether a person perceives a technology as such might be culturally dependent. As different studies have pointed out, whether facial recognition is embraced, accepted, or seen as a threat differs across cultures (e.g., [17]). In a similar vein, the extent to which facial recognition is seen as an entity capable of recognizing a person’s identity in a normative sense might depend on context and culture as well. While said studies suggest that facial recognition is more easily accepted in Asian societies, because it is much less perceived as a threat to privacy or other values, Asian societies are also said to personify AI applications more than people in the West do [9]. This tendency to personify AI suggests that it is in fact more likely that people will experience facial recognition as misrecognizing them, and thus potentially suffer damage to their development of self-worth.

6 Conclusion

When a facial recognition system fails to identify or categorize you accurately, you are misrecognized in the most literal sense of the word. But, as I have shown in this paper, there is a deeper layer to this kind of misrecognition. Facial recognition systems can misrecognize people in a normative sense. Systems with algorithmic biases towards certain demographics fail to recognize the people belonging to those demographics as equally important citizens. Wrong categorizations, whether they are caused by biases or by the limits of facial recognition technology, neglect what makes a person unique and different from others. That is why facial recognition technology can amplify existing struggles for recognition or even give rise to new ones. This is not just a political or societal issue, but an ethical one too. Following Honneth and Taylor, I have argued that (mis)recognition by facial recognition can affect people’s development of self-esteem and self-respect, just like recognition from fellow human beings does. Therefore, I conclude that technology developers and policy-makers should consider the ways in which facial recognition and other AI applications grant individuals and groups adequate recognition, alongside other, more commonly raised ethical concerns such as privacy and non-discrimination. To do this, it is necessary to critically reflect upon the assumptions about identity that underlie a technology and to test more thoroughly whether a technology functions appropriately for all users of a system or members of the society in which it is implemented. Misrecognition may not be an issue so severe that the development or use of facial recognition should be limited, but it should be taken seriously given its potential to inflict long-lasting harm.