The growing popularity of facial recognition tools among local law enforcement in Minnesota has renewed public debate about how, when and why the powerful technology is deployed.

Since 2018, police have run nearly 1,000 searches through the Hennepin County Sheriff's Office's facial recognition system, with more than half of those searches coming this year alone, according to new county figures.

And while the Twin Cities still lags behind other jurisdictions in using the technology, its increasing use here has caught the attention of civil liberties advocates, who say it threatens privacy and is discriminatory. The county records also reveal that the Minneapolis Police Department was using the technology in 2018, when a spokeswoman denied that was happening.

The county figures offer a glimpse into the scope of police use of the technology, which employs machine learning algorithms to detect human faces from surveillance cameras, social media and other sources and screen them against a countywide mug shot database.

They show that outside agencies used the Sheriff's Office facial recognition platform 516 times through the first nine months of 2020, far more than any previous year. The Sheriff's Office processed 308 such requests all of last year, up from 18 in 2015.

The program's users range from the obvious — St. Paul police, with 83 requests in five years — to the obscure — the state Department of Commerce, which used facial recognition as part of an insurance fraud investigation. Regional drug task forces were also regular clients.

Among federal agencies, the Drug Enforcement Administration has used the system 14 times, according to the figures, the FBI six times, Homeland Security twice, the U.S. Postal Inspection Service once, and the Bureau of Alcohol, Tobacco, Firearms and Explosives 10 times, all in the past two years.

The program's biggest client, the Minneapolis Police Department, has for years deflected questions about its use of the technology. In 2018, a spokeswoman told the Star Tribune that the department had no plans to use the technology, in response to questions for a story about a City Council member's proposal to restrict its use.

The county records show that MPD investigators used the system's software 237 times between Oct. 1, 2015, and Sept. 28, 2020. The county figures, obtained by the Star Tribune through a data practices request in September, reflect every request from an outside agency to use facial recognition software, but they don't provide details about the underlying cases.

A Minneapolis police spokesman said this week that he didn't have data on how often the technology has helped solve a case and that he was "unaware" of its use at any protest or demonstration, as some activists have suspected. But, he said, a positive identification through facial recognition alone doesn't constitute probable cause and would require more legwork before an arrest could be made.

"Please note that it is not uncommon for us to share with all of our law enforcement partners photos of persons identified as likely suspects in crimes," said the spokesman, John Elder. "It is possible that when we send them out, other agencies are utilizing whatever software they have."

Munira Mohamed, a policy associate with the state branch of the American Civil Liberties Union, said the ACLU and other groups see a persistent lack of transparency in how police use facial recognition, despite nagging concerns over privacy and false matches.

Study after study, she said, has shown the technology is particularly problematic in identifying people of color, women, the "young, old, trans and LGBTQ — basically anyone who's not a white man."

Mohamed said the group has been working with Minneapolis Council Member Steve Fletcher and other officials to draft a citywide moratorium on facial recognition, at least until new standards governing its use are set.

"It's such an opaque, dark universe of stuff, you never really know what sort of technology is being used, you never really know how it's being funded and … how it's being deployed," she said, adding that the proposed ban comes after Minneapolis lawmakers passed a resolution pledging to protect citizens' privacy. She said that the moratorium, an early version of which is expected later this month, may not address third-party facial recognition platforms.

At the same time, efforts to address its use at the state level are still in their infancy, she said.

The office of then-Hennepin County Sheriff Rich Stanek obtained the technology in 2012, an acquisition first uncovered years later through a lengthy court battle waged by Tony Webster, a local investigative journalist and privacy advocate.

According to internal documents obtained by Webster, the agency uses software from Cognitec, a German R&D firm. Like most such systems, Cognitec's software works by analyzing people's unique facial measurements — noting, for instance, the space between the eyes or the contour of the lips, and breaking them down into long strands of code called "feature vectors" or "faceprints" — to create a virtual map that can be compared against the county's ever-growing database of more than 1.4 million mug shots.
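How such a comparison works in practice can be sketched in a few lines of code. The snippet below is a minimal, hypothetical illustration of the general approach, not Cognitec's actual software or API: a probe faceprint is scored against every faceprint in a gallery, and the closest match above a similarity threshold is returned. The 128-dimensional vectors, the "booking_" identifiers and the 0.8 threshold are illustrative assumptions.

```python
# A minimal, hypothetical sketch of a faceprint search.
# This is NOT Cognitec's software or API; the vector size, gallery names
# and 0.8 threshold are illustrative assumptions only.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two feature vectors ("faceprints")."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def best_match(probe: np.ndarray, gallery: dict, threshold: float = 0.8):
    """Compare a probe faceprint against a gallery of mug shot faceprints.

    Returns (record_id, score) for the closest match, or None if nothing
    clears the illustrative similarity threshold.
    """
    best_id, best_score = None, -1.0
    for record_id, vector in gallery.items():
        score = cosine_similarity(probe, vector)
        if score > best_score:
            best_id, best_score = record_id, score
    return (best_id, best_score) if best_score >= threshold else None

# Toy data: random vectors stand in for faceprints extracted from mug shots.
rng = np.random.default_rng(seed=0)
gallery = {f"booking_{i}": rng.normal(size=128) for i in range(1_000)}
probe = gallery["booking_42"] + rng.normal(scale=0.05, size=128)  # noisy copy
print(best_match(probe, gallery))  # expect booking_42 with a score near 1.0
```

Even in this toy form, the output is only a ranked lead rather than an identification, which is one reason police say a facial recognition hit alone does not constitute probable cause.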

In many ways, facial recognition tools are already part of everyday life, transforming the way people check in at airports, unlock smartphones or tag their friends in photos on social media.

But as the technology becomes more powerful, cities must try to balance its potential to improve public services with its ability to cause harm, says Shobita Parthasarathy, a public policy professor at the University of Michigan. Like other cutting-edge technology, its reliance on algorithms, not humans, gives it the illusion of objectivity.

"Technologies are the product of the society that builds them, and because our society has biases, our technologies will also have biases, full stop," said Parthasarathy, who recently co-authored a study on the emergence of facial recognition in schools.

And yet, she adds, there's no federal statute on facial recognition, whose use is governed by a patchwork of state and local laws.

Parthasarathy said she worries not only about the potential expansion of government surveillance but about the long-term psychological toll on people of knowing they're being watched every time they leave their homes.

Hennepin County Sheriff David Hutchinson, who has continued the program he inherited from his predecessor, was not available for comment, according to a spokesman.

In a statement, Hennepin County Sheriff's Capt. Spencer Bakke said the agency itself had used the technology 73 times over the past five years, but it was not clear how many of those cases led to arrests.

He said a "small number of trained users" from the agency's Criminal Intelligence Division have access to the system, which costs $22,500 a year to operate.

Bakke said the technology is exclusively used in criminal and death investigations and is not connected to surveillance cameras, adding that "there is no ability to operate any 'choke points,' nor can it be used for active surveillance."

Sheriff's officials disputed a report by BuzzFeed News from earlier this year that said they ran hundreds of facial recognition searches through Clearview AI, a controversial startup that has amassed a database of billions of photos scraped from Facebook, YouTube and Google. They said that in June 2019 a crime analyst from another agency who was assigned to the office signed up for a 30-day free trial version of the software, but that it "has not been used by anyone [with the Sheriff's Office] since this trial period."

Many in law enforcement have defended the technology as too important a tool to ignore in an increasingly wired world. With the help of facial recognition, even a grainy image captured on a security camera or social-media account can lead investigators to a suspect, as it did in the case of the man who brutally assaulted an elderly man after an argument aboard a Metro Transit bus last winter.

The defendant in that case, Leroy Davis-Miles, was recently sentenced to more than 13 years in prison for his role in the attack.