
NIST study finds that masks defeat most facial recognition algorithms

Image Credit: Khari Johnson / VentureBeat



In a report published today by the National Institute of Standards and Technology (NIST), a physical sciences laboratory and non-regulatory agency of the U.S. Department of Commerce, researchers evaluated how facial recognition algorithms perform on faces partially covered by protective masks. They report that the 89 commercial facial recognition algorithms they tested, from Panasonic, Canon, Tencent, and others, had error rates between 5% and 50% when matching photos of faces with digitally applied masks to photos of the same person without a mask.

“With the arrival of the pandemic, we need to understand how face recognition technology deals with masked faces,” Mei Ngan, a NIST computer scientist and a coauthor of the report, said in a statement. “We have begun by focusing on how an algorithm developed before the pandemic might be affected by subjects wearing face masks. Later this summer, we plan to test the accuracy of algorithms that were intentionally developed with masked faces in mind.”

The study — part of a series from NIST’s Face Recognition Vendor Test (FRVT) program conducted in collaboration with the Department of Homeland Security’s Science and Technology Directorate, the Office of Biometric Identity Management, and Customs and Border Protection — explored how well each of the algorithms was able to perform “one-to-one” matching, where a photo is compared with a different photo of the same person. (NIST notes this sort of technique is often used in smartphone unlocking and passport identity verification systems.) The team applied the algorithms to a set of about 6 million photos used in previous FRVT studies, but they didn’t test “one-to-many” matching, which is used to determine whether a person in a photo matches any in a database of known images.
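To make the one-to-one terminology concrete, here is a minimal sketch of how such a verification check is commonly structured, assuming a generic embedding model; the cosine similarity measure and the 0.6 threshold are illustrative stand-ins rather than details of NIST's test harness or any vendor's algorithm.

```python
# Minimal sketch of one-to-one (verification) matching, the mode NIST tested.
# The embeddings and threshold below are illustrative, not NIST's methodology.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two face embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(probe: np.ndarray, reference: np.ndarray, threshold: float = 0.6) -> bool:
    """Return True if the probe photo's embedding matches the reference's.

    One-to-one matching compares a single probe image (e.g. a masked face at a
    passport gate) against a single enrolled reference image. Real systems
    calibrate the threshold to a target false match rate.
    """
    return cosine_similarity(probe, reference) >= threshold

# Hypothetical embeddings standing in for the output of a face recognition model.
enrolled = np.array([0.12, 0.88, 0.45, 0.31])
masked_probe = np.array([0.10, 0.73, 0.52, 0.29])
print(verify(masked_probe, enrolled))
```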

Because real-world masks differ, the researchers came up with nine mask variants to test, which included differences in shape, color, and nose coverage. The digital masks were black or a light blue approximately the same color as a blue surgical mask, while the shapes ranged from round masks covering the nose and mouth to a type as wide as the wearer’s face. The wider masks had high, medium, and low variants that covered the nose to varying degrees.


Above: Examples of the digitally applied masks NIST used in its tests.

According to the researchers, algorithm accuracy with masked faces declined “substantially” across the board. Using unmasked images, the most accurate algorithms failed to authenticate a person about 0.3% of the time. With masked images, even these top algorithms’ failure rate rose to about 5%, while many “otherwise competent” algorithms failed between 20% and 50% of the time.

In addition, masked images more frequently caused algorithms to be unable to process a face, meaning they couldn’t extract features well enough to make an effective comparison. The more of the nose a mask covered, the lower the algorithm’s accuracy. Error rates were generally lower with round masks and with black masks as opposed to surgical blue ones. And while false negatives increased, false positives remained stable or modestly declined. (A false negative means an algorithm failed to match two photos of the same person, while a false positive means it incorrectly declared a match between photos of two different people.)
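As a rough illustration of how those two error rates are computed, the sketch below derives a false non-match rate and a false match rate from made-up comparison scores; the scores and threshold are invented for the example and are not drawn from NIST's data.

```python
# Illustrative calculation of the two error rates discussed above, using
# made-up similarity scores rather than NIST's data.
import numpy as np

def error_rates(genuine_scores, impostor_scores, threshold):
    """Compute false non-match and false match rates at a given threshold.

    genuine_scores:  scores for pairs of photos of the same person
    impostor_scores: scores for pairs of photos of different people
    """
    genuine = np.asarray(genuine_scores)
    impostor = np.asarray(impostor_scores)
    fnmr = float(np.mean(genuine < threshold))   # same person, wrongly rejected
    fmr = float(np.mean(impostor >= threshold))  # different people, wrongly matched
    return fnmr, fmr

# Masks tend to lower genuine scores, pushing false negatives up while false
# positives stay flat or decline, consistent with the pattern NIST describes.
fnmr, fmr = error_rates([0.82, 0.55, 0.48, 0.71], [0.12, 0.33, 0.05, 0.20], 0.6)
print(f"False non-match rate: {fnmr:.0%}, false match rate: {fmr:.0%}")
```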

“With respect to accuracy with face masks, we expect the technology to continue to improve,” continued Ngan. “But the data we’ve taken so far underscores one of the ideas common to previous FRVT tests: Individual algorithms perform differently. Users should get to know the algorithm they are using thoroughly and test its performance in their own work environment.”

The results of the study align with VentureBeat reporting from earlier this year, which found that facial recognition algorithms used by Google and Apple struggled to recognize mask-wearing users. But crucially, NIST didn’t take into account systems designed specifically to identify mask wearers, like those from Chinese company Hanwang and researchers affiliated with Wuhan University. In an op-ed in April, Northeastern University professor Woodrow Hartzog characterized masks as a temporary technological speed bump that won’t stand in the way of increased facial recognition use in the age of COVID-19. Already, companies like Clearview AI are attempting to sell facial recognition to state agencies for the purpose of tracking people infected with COVID-19.

Perhaps in recognition of this, NIST plans later this summer to test algorithms created with face masks in mind and to conduct tests using one-to-many searches and other variations.
