Black face matters: why neural networks are accused of discrimination

In the United States, another discrimination scandal has broken out: in Detroit, a facial recognition system mistakenly identified a Black man as a criminal. Robert Williams, an African American, was arrested after the algorithm judged his photo identical to an image of the robber, also an African American, taken from a store's security cameras.

The fact is that since July of last year, Detroit police investigating murders, assaults, and robberies have relied primarily on facial recognition systems, turning to witness testimony only afterward.

The case is telling, and it is far from the only one.

What happened and why it matters

After the store was robbed in October 2018, police officers studied the surveillance video, enlarged the attacker's face, and ran it through a facial recognition database. The system identified the man in the video as Robert Williams. His photo, among others, was then shown to the store's security guard, who had not been present at the time of the robbery. The guard identified Williams, and the man was arrested.

Later, the police themselves compared his face with the criminal's on the tape and admitted that the system had been mistaken. Williams says he was held at the station for 30 hours, and the case against him was closed only two weeks later.

Against the backdrop of the Black Lives Matter protest movement, the story caused a real scandal. Williams's photo and biometric data remain in the police database, which means his photo may surface again in a future inquiry. The ACLU, the American Civil Liberties Union, has filed a formal complaint against the Detroit police and is demanding that Williams be removed from the database.

What’s wrong with facial recognition in forensics

Artificial intelligence experts insist that recognition systems are not yet reliable enough to serve as a tool for the police. They are far more often mistaken when recognizing women, older people, Black people, and Asians, in part because they are trained on datasets dominated by photos of white males.
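This is exactly the disparity that independent audits measure. Below is a minimal sketch of such a per-group error audit in Python; the data is synthetic and the group labels are made up for illustration, whereas a real audit (such as NIST's demographic tests) would use the system's actual decisions on a labeled benchmark.

```python
# Minimal sketch of a per-group error audit for a face verification system.
# y_true: 1 if the pair of photos shows the same person, else 0.
# y_pred: the system's match decision. All data below is synthetic.
import numpy as np

def error_rates_by_group(y_true, y_pred, groups):
    """Return false match rate (FMR) and false non-match rate (FNMR) per group."""
    rates = {}
    for g in np.unique(groups):
        mask = groups == g
        t, p = y_true[mask], y_pred[mask]
        fmr = np.mean(p[t == 0] == 1) if np.any(t == 0) else float("nan")
        fnmr = np.mean(p[t == 1] == 0) if np.any(t == 1) else float("nan")
        rates[g] = {"FMR": fmr, "FNMR": fnmr}
    return rates

rng = np.random.default_rng(0)
groups = rng.choice(["group_a", "group_b"], size=10_000)
y_true = rng.integers(0, 2, size=10_000)
# Simulate a model that errs more often on group_b (the skewed-dataset effect).
noise = np.where(groups == "group_b", 0.25, 0.05)
flip = rng.random(10_000) < noise
y_pred = np.where(flip, 1 - y_true, y_true)

for g, r in error_rates_by_group(y_true, y_pred, groups).items():
    print(g, {k: round(v, 3) for k, v in r.items()})
```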

A recent example: a neural network was asked to reconstruct a photo of Barack Obama that had been compressed to a tiny resolution. The result was the face of a white man. The developers of the algorithm, the PULSE team, have already promised to fix it.
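The failure is easier to understand once you see how PULSE-style upscaling works: it searches a face generator's latent space for any plausible high-resolution face that, when downscaled, matches the low-resolution input. The toy sketch below illustrates that search; the untrained stand-in generator is an assumption to keep the example self-contained, since the real method uses a pretrained StyleGAN.

```python
# A toy sketch of the latent-space search behind PULSE-style upscaling.
import torch
import torch.nn.functional as F

# Stand-in for a pretrained face generator (assumption for illustration only).
generator = torch.nn.Sequential(
    torch.nn.Linear(64, 256), torch.nn.ReLU(),
    torch.nn.Linear(256, 3 * 32 * 32), torch.nn.Tanh(),
)
for p in generator.parameters():
    p.requires_grad_(False)  # only the latent code is optimized

def downscale(img, factor=4):
    # The degradation model: average-pool the candidate face down to low res.
    return F.avg_pool2d(img, factor)

low_res = torch.rand(1, 3, 8, 8) * 2 - 1    # the degraded input photo

z = torch.randn(1, 64, requires_grad=True)  # latent code being optimized
optimizer = torch.optim.Adam([z], lr=0.05)

for step in range(200):
    optimizer.zero_grad()
    candidate = generator(z).view(1, 3, 32, 32)
    # Search for a "plausible" face whose downscaled version matches the input.
    loss = F.mse_loss(downscale(candidate), low_res)
    loss.backward()
    optimizer.step()
```

Because the optimization settles on whatever faces the generator considers typical, a generator trained mostly on white faces will pull even a low-resolution photo of Obama toward a white man; the bias lives in the training data, not in the search procedure.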

At the same time, AI is used for more than matching faces against police databases. Researchers from Harrisburg University in the US recently presented work in so-called predictive forensics, where AI is used to prevent crimes: neural networks identify potential criminals by their faces, drawing on criminal statistics and big data. The creators claim a recognition accuracy of 80%.

In response, more than 1,700 scientists and experts demanded that the project's results be withdrawn from publication. They believe that such an approach, first of all, borders on physiognomy, which is to say pseudoscience. But most importantly, the algorithms are many times more likely to label Black people as criminals than white people, which only reinforces racial conflict and prejudice. The problem is that, according to the statistics, African Americans in the United States do indeed commit crimes ten times more often than whites, and a model trained on such statistics simply reproduces the skew.
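A quick back-of-the-envelope calculation shows why experts distrust the claimed 80% accuracy: at realistic base rates, false alarms swamp correct flags, and the group with the higher recorded crime rate accumulates the most accusations in absolute terms. The numbers below are illustrative assumptions, not data from the Harrisburg study.

```python
# What "80% accuracy" means at population scale (illustrative numbers only).
def flags(population, base_rate, sensitivity=0.8, specificity=0.8):
    offenders = population * base_rate
    innocent = population - offenders
    true_flags = offenders * sensitivity        # offenders correctly flagged
    false_flags = innocent * (1 - specificity)  # innocent people flagged
    return true_flags, false_flags

# Hypothetical groups with a tenfold difference in recorded offense rates.
for group, rate in {"group_a": 0.001, "group_b": 0.01}.items():
    tp, fp = flags(1_000_000, rate)
    print(f"{group}: {fp:,.0f} innocent people flagged vs {tp:,.0f} offenders; "
          f"precision = {tp / (tp + fp):.1%}")
```

Even under these charitable assumptions, the overwhelming majority of people flagged in both groups are innocent.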

Similar experiments were carried out in Shanghai in 2016. However, researchers from Google and Princeton questioned their objectivity: they noted that in all the photos the system attributed to criminals, the people were gloomy and unsmiling, while in the rest it was the other way around, which suggests the model had learned to read facial expressions rather than criminality.

Why IT giants refuse such projects

In the wake of the latest discrimination scandals in the US, IBM, Microsoft, and Amazon have announced that they are suspending facial recognition projects for police and intelligence agencies. This came after reports that these technologies were being used against protesters.

IBM said it refuses to work on any technology used "for mass surveillance and violation of basic human rights and freedoms," as CEO Arvind Krishna wrote in a letter to members of the US Congress. The company will no longer provide facial recognition APIs to software developers.

In 2018, the National Institute of Standards and Technology published a study whose authors argued that IBM's facial recognition technologies discriminated against Black women. The company has now called for a nationwide dialogue on the issue. Krishna also promised that IBM would support measures against police brutality, such as a federal registry listing all police officers who engage in it.

Microsoft has likewise refused to sell its facial recognition technology to the authorities.

Amazon has also suspended its facial recognition cooperation with the police, though so far only as a one-year moratorium. Its Rekognition technology has been used across the country since 2016. Amazon is now urging the authorities to impose strict restrictions on police use of such technologies.

Recall that California banned police and intelligence agencies from using facial recognition last year; similar bans are now being actively discussed in Boston and Philadelphia.

How are things in Russia?

In Russia, the opposite is true so far: recognition systems are actively used both in the metro and on the streets, and the scale only grows every year.

In June 2020, it became known that Orwell, a system with a facial recognition function, would be launched in Russian schools. The project has been entrusted to the National Center for Informatization, part of Rostec, and more than ₽2 billion will be allocated for it. The system will help track when children arrive at and leave school, as well as detect unauthorized persons in and around the building.

As part of the project, about 30,000 surveillance cameras will be connected to the Orwell computer vision platform. It also integrates a facial recognition module from NtechLab, one of the industry's largest developers and the main contractor for the Moscow authorities.

That same NtechLab recently took third place in the international Deepfake Detection Challenge for recognizing deepfakes in video. The competition, hosted by Amazon, Facebook, and Microsoft, drew more than 2,200 teams from around the world; the top prize, incidentally, was $1 million. As promised, the source code of the NtechLab algorithm was released to the public to help the fight against deepfakes. The company estimates the accuracy of its facial recognition algorithms at more than 90%.

