AI points finger at the wrong man: Police's $6 billion tech arsenal sparks outrage after facial recognition blunder

The New York Police Department (NYPD) is often described as one of the best-resourced police forces in the world. With a $6 billion annual budget and more than 48,000 full-time employees, its access to technology rivals the defense capabilities of some nations. According to Futurism, between 2007 and 2020 the department invested more than $2.8 billion in surveillance tools, ranging from stingray phone trackers to predictive crime software.

But one of its most controversial investments is its facial recognition system, first introduced in 2011. While touted as a crime-fighting innovation, the system is now drawing public outrage after it mistakenly identified an innocent man as a suspect.

Wrong Match, Wrong Man
As The New York Times reported, the case began when investigators used CCTV footage from a February incident involving public lewdness. The footage was low quality, yet the facial recognition software still produced six potential matches. All were Black men with dreadlocks and facial hair. Among them was Trevis Williams, a Brooklyn father who bore little resemblance to the actual suspect apart from his hairstyle.

Despite clear warnings that the AI’s output was not evidence of guilt, detectives placed Williams in a photo lineup. When the victim picked him out, police treated it as probable cause. On April 21, Williams was arrested and jailed for more than two days.

His pleas of innocence — including the facts that he was 12 miles away at the time of the crime, and a full eight inches taller and 70 pounds heavier than the suspect — fell on deaf ears. “That’s not me, man, I swear to God, that’s not me,” Williams told police, according to The New York Times. The detective’s reply was chilling: “Of course you’re going to say that.”

Lingering Questions
By July, charges against Williams were dropped and the entire investigation was closed. Still, the ordeal left deep scars and renewed questions about the NYPD’s reliance on flawed technology. Critics say this incident highlights the dangers of combining AI-driven identification with outdated practices like photo lineups, which are already prone to human error.

A Pattern of Mistakes
Williams is not the first victim of AI-driven misidentification. Futurism notes that at least three Black men have faced wrongful arrests in Detroit under similar circumstances, sparking calls for stricter guidelines. Legal advocates argue that facial recognition should never be the sole basis for including someone in a police lineup.

Yet in New York, no such safeguards exist. The NYPD has not announced whether it will review its policies following Williams’ wrongful arrest.

The case serves as a cautionary tale of how advanced surveillance tools, when unchecked, can amplify rather than reduce injustice. Instead of enhancing accuracy, facial recognition in this case acted as a shortcut to a wrongful arrest.
