Dallas Police Used Face Recognition Software Without Authorization, Installed on Personal Phones

Photo: Justin Sullivan (Getty Images)

Dallas police officers used unauthorized facial recognition software to conduct between 500 and 1,000 searches in attempts to identify people based on photographs. A Dallas Police spokesperson says the searches were never authorized by the department, and that in some cases, officers had installed facial recognition software on their personal phones.

The spokesperson, Senior Cpl. Melinda Gutierrez, said the department first learned of the matter after being contacted by investigative reporters at BuzzFeed News. The face recognition app, known as Clearview AI, was never approved, she said, “for use by any member of the department.”


Department leaders have since ordered the software deleted from all city-issued devices.


Officers are not entirely banned from possessing the software, however. No order has been given to delete copies of the app installed on personal phones. “They were only instructed not to use the app as a part of their job functions,” Gutierrez said.


Clearview AI did not respond Wednesday when asked whether it had revoked access for officers whose departments say their use was unauthorized.

The Dallas Police Department says it has never entered into a contract with Clearview AI. Yet officers were still able to download the app by visiting the company’s website. According to BuzzFeed, officers who signed up for a free trial at the time were not required to prove they were authorized to use the software.


What’s more, emails obtained by the outlet show Clearview AI’s CEO, Hoan Ton-That, has been willing to help officers register for his software using non-work email addresses.

During an internal review, Dallas officers told superiors they had learned about Clearview through word of mouth from other officers.


BuzzFeed News first revealed on Tuesday that Clearview AI was being used in Dallas, following a yearlong investigation into the company. The Dallas Police Department is one of 34 agencies to acknowledge that employees had used the software without approval.

Using data supplied by a confidential source, reporters found that nearly 2,000 public agencies have used Clearview AI’s facial recognition tool. The source was granted anonymity, BuzzFeed said, due to their fear of retribution.


Nearly 280 agencies told the reporters that employees had never used the software. Sixty-nine of those later recanted. Nearly a hundred declined to confirm Clearview AI was used, and more than 1,160 organizations didn’t respond at all.


The BuzzFeed data, which spans 2018 to February 2020, also shows the Dallas Security Division, which oversees security at City Hall, conducted between 11 and 50 searches. A spokesperson said the division has no record of Clearview AI being used.

Dallas Mayor Eric Johnson did not immediately respond to an email. A city council member said they needed time to review the matter before speaking on the record.


Misuse of confidential police databases is nothing new. In 2016, the Associated Press unearthed reports of police regularly accessing law enforcement databases to glean information on “romantic partners, business associates, neighbors, journalists and others for reasons that have nothing to do with daily police work.”

Between 2013 and 2015, the AP found at least 325 incidents of officers being fired, suspended, or forced to resign for abusing access to law enforcement databases. In another 250 cases, officers received reprimands, counseling, or lesser forms of discipline.


Today, facial recognition is considered one of the most controversial technologies used by police. The American Civil Liberties Union has pressed federal lawmakers to impose a nationwide moratorium on its use, citing multiple studies showing the software is error-prone, particularly in cases involving people with dark skin.

A 2019 study of 189 facial recognition systems conducted by the National Institute of Standards and Technology, a branch of the U.S. Commerce Department, for example, found that people of African and Asian descent were misidentified by software at rates up to 100 times higher than those for white individuals. Women and older people were at greater risk of being misidentified, the tests showed.


One system used in Detroit was estimated to be inaccurate “96 percent of the time” by the city’s own police chief.

Clearview AI, which is known to have scraped billions of images of people off social media without their consent or the consent of platforms, has consistently claimed its software is bias-free and, in fact, helps to “prevent the wrongful identification of people of color.”


Ton-That, the CEO, told BuzzFeed that “independent testing” has shown his product is unbiased; however, he ignored repeated requests for more information about those alleged tests. The outlet also sent 30 images of people, including several photos of computer-generated faces, to a source with access to the system. Clearview AI falsely matched two of the fake faces, one of a woman of color and another of a young girl of color, to images of real people.

In 2019, more than 30 organizations with a combined membership of 15 million people called on U.S. lawmakers to permanently ban the technology, saying that no amount of regulation would ever adequately shield Americans from persistent civil liberties violations.


Correction: A previous version of this article mistakenly said that Clearview AI had “scraped billions of images of people off social media with their consent.” The images were scraped without their consent. We regret the error.