Facial recognition discriminates, but who is to blame?

Tinka Krikke
06/01/2021

With just one glance at our faces, our phones know exactly who we are. Facial recognition has become a ubiquitous feature of our digitalized world: social media platforms use it to identify people, companies use it to target advertisements, and law enforcement uses it to track down suspects. However, facial recognition has been found to be less accurate for certain demographics. It appears to perform worst at identifying Black people, meaning that members of this group are the most likely to be confused with other faces or not recognized at all (Najibi, 2020). This unequal performance has serious consequences and raises pressing questions about discrimination.

Facial recognition's bias

Facial recognition technology identifies people based on photos of their faces. U.S. law enforcement has used it to track down suspects of whom they already have a picture, for example a mugshot (Najibi, 2020). However, this disadvantages the people whose faces the technology handles worst: Black people are more likely to be incorrectly matched to a mugshot than members of other demographic groups.
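To make the mechanism concrete, the sketch below shows the kind of one-to-many search a mugshot lookup performs: a probe image is reduced to a numerical embedding and compared against a gallery of known faces. This is a toy illustration only; the similarity measure is standard, but the threshold value and the function names are hypothetical, and real systems rely on proprietary deep networks to produce the embeddings.

    # Toy sketch of one-to-many face matching against a mugshot gallery.
    # Embeddings are assumed to come from some face-recognition model;
    # that model, and the 0.6 threshold, are hypothetical placeholders.
    import numpy as np

    def cosine_similarity(a, b):
        # Similarity between two embedding vectors, in [-1, 1].
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    def search_gallery(probe, gallery, threshold=0.6):
        # gallery maps identity -> embedding. Every entry whose similarity
        # to the probe clears the threshold is reported, best match first.
        # A false positive occurs when a *different* person's mugshot
        # clears the threshold.
        scores = [(name, cosine_similarity(probe, emb))
                  for name, emb in gallery.items()]
        return sorted((s for s in scores if s[1] >= threshold),
                      key=lambda s: s[1], reverse=True)

If the underlying model places faces from one demographic group closer together in embedding space, unrelated people in that group clear the threshold more often, producing exactly the excess of false matches described above.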

Such misidentification reinforces inequality in a policing system where Black people are already disproportionately targeted. Many say that this technology feels morally wrong or discriminatory. If it is, who, exactly, is discriminating? Are the engineers, the company that produces the technology, or law enforcement to blame? This paper analyzes responsibility for discrimination in cases of artificial intelligence, drawing on philosophical debates about causal and moral responsibility.

Causal and moral responsibility  

I will differentiate between two kinds of responsibility: causal and moral responsibility. A person is causally responsible if they are part of the causal chain that led to a certain situation. Since the engineers created the facial recognition system, they are causally responsible. The same goes for all the other parties, as each is part of the causal chain that leads to discrimination. However, they do not necessarily have full control over these causes. This points us toward the notion of moral responsibility, which is much harder to assign: for someone to be morally responsible, they need some control over the actions that led to discrimination.

This is where the case of facial recognition, and of artificial intelligence more broadly, becomes complicated. Machine learning systems have become so complex that the human mind can barely keep up. Facial recognition has become an almost-autonomous system that adapts to new patterns based on its input. As a result, even the engineers do not know exactly what is going on inside the machine. Producers have not yet managed to fully fix the algorithms that perform unequally on Black and white faces (Simonite, 2019).
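What "unequal accuracy" means can itself be stated precisely. The toy audit below, in the spirit of the benchmark evaluations Najibi (2020) summarizes, computes a false match rate per demographic group from labelled comparison trials; the data format and function name here are hypothetical.

    # Toy per-group error audit. `trials` is an iterable of
    # (group, same_person, matched) tuples: `same_person` is the ground
    # truth for a comparison, `matched` is the system's verdict.
    from collections import defaultdict

    def false_match_rate_by_group(trials):
        errors = defaultdict(int)   # wrongful matches per group
        totals = defaultdict(int)   # impostor comparisons per group
        for group, same_person, matched in trials:
            if not same_person:     # only impostor pairs can false-match
                totals[group] += 1
                if matched:         # system wrongly declared a match
                    errors[group] += 1
        return {g: errors[g] / totals[g] for g in totals}

If these rates differ sharply between groups, the system discriminates in effect, whatever any individual engineer intended; yet, as the next paragraphs argue, that observation alone does not settle who is morally responsible.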

This opacity means that none of the parties mentioned can really control the self-learning machine, and therefore none of them can be morally responsible. A situation arises in which no one can fully be held morally responsible for algorithmic discrimination: a responsibility gap. Martin (2019) states that "algorithms are so complicated and difficult to explain — even called unpredictable and inscrutable — that assigning responsibility to the developer or the user is deemed inefficient and even impossible." Yet almost everyone still uses facial recognition. Does that not make everyone responsible? Matthias (2004) proposes that the only realistic solution is to acknowledge the existence of this responsibility gap and to find a way to address it.

In contrast, Martin (2019) rejects the idea that algorithms are neutral and that their consequences cannot be blamed on technology companies. The algorithms within a facial recognition system are part of a larger decision, meaning that the manufacturers make a moral choice that affects "the delegation of who-does-what between algorithms and individuals within the decision" (Martin, 2019). She therefore argues that the company that produces the facial recognition system is responsible for discrimination, as it deliberately made the decisions that led to the algorithm. With this conclusion, she connects moral and causal responsibility.

Addressing the responsibility gap  

I have analyzed discriminatory facial recognition technology through the lens of the philosophical debate on responsibility, focusing on the responsibility gap. The complexity of facial recognition technology often exceeds human comprehension. Many see it as an autonomous, self-teaching system, which leads to a perceived lack of control by designers and manufacturers. Others argue that the producers are in fact responsible, since they make the decisions that lead to the end product. It remains unclear who can be held morally responsible for discrimination caused by this technology; this open question is the responsibility gap. Further research is needed to address these issues and keep up with a digitalizing world. Either way, we can no longer ignore that evolving artificial intelligence, including facial recognition, challenges the social structure of responsibility.

References 

Martin, K. (2019). Ethical Implications and Accountability of Algorithms. Journal of Business Ethics, 160, 835–850.

Matthias, A. (2004). The responsibility gap: Ascribing responsibility for the actions of learning automata. Ethics and Information Technology, 6, 175–183.

Najibi, A. (2020, October 26). Racial Discrimination in Face Recognition Technology. Science in the News.

Simonite, T. (2019, July 22). The Best Algorithms Still Struggle to Recognize Black Faces. Wired.