Amid reckoning over police racism, algorithmic bias in focus

WASHINGTON (AFP) – A wave of protests over law enforcement abuses has highlighted concerns about artificial intelligence programs such as facial recognition, which critics say may reinforce racial bias.

While the protests have focused on police misconduct, activists point to flaws that may lead to unfair applications of law enforcement technologies, including facial recognition, predictive policing and “risk assessment” algorithms.

The issue came to the forefront recently with the wrongful arrest in Detroit of an African American man based on a flawed facial recognition algorithm that misidentified him as a robbery suspect.

Critics of facial recognition use in law enforcement say the case underscores the pervasive impact of a flawed technology.

Mutale Nkonde, an AI researcher, said that even though algorithmic bias has been debated for years, the latest case and other incidents have driven the message home.

“What is different in this moment is we have explainability and people are really beginning to realize the way these algorithms are used for decision-making,” said Nkonde, a fellow at Stanford University’s Digital Society Lab and the Berkman Klein Center at Harvard.

Amazon, IBM and Microsoft have said they would not sell facial recognition technology to law enforcement without rules to protect against unfair use. But many other vendors offer a range of technologies.

Secret algorithms

Nkonde said the technologies are only as good as the data they rely on.

“We know the criminal justice system is biased, so any model you create is going to have ‘dirty data,’” she said.

Daniel Castro of the Information Technology & Innovation Foundation, a Washington think tank, said, however, that it would be counterproductive to ban a technology that automates investigative tasks and enables police to be more productive.

“There are (facial recognition) systems that are accurate, so we need to have more testing and transparency,” Castro said.

“Everyone is concerned about false identification, but that can happen whether it’s a person or a computer.”

Seda Gurses, a researcher at the Netherlands-based Delft University of Technology, said one problem with analyzing the systems is that they use proprietary, secret algorithms, sometimes from multiple vendors.

“This makes it very difficult to identify under what conditions the dataset was collected, what qualities these images had, how the algorithm was trained,” Gurses said.
