
Facebook detection challenge winners spot deepfakes with 82% accuracy

A Facebook sign is seen at the second China International Import Expo (CIIE) in Shanghai, China November 6, 2019.
Image Credit: REUTERS/Aly Song



Deepfake Detection Challenge partners — including Facebook, the Partnership on AI, and others — have announced the contest winners. The top-performing model achieved 82.56% accuracy at detecting deepfakes in a public data set of 100,000 videos created for the project. More than 2,000 participants contributed over 35,000 models to the competition, which started in December and concluded May 31. Top-performing teams split $1 million in prize money.

“The first entries were basically 50% accuracy, which is worse than useless, and the first real ones were like 59% accuracy, and the winning models were 82% accuracy,” Facebook CTO Mike Schroepfer told reporters.

Schroepfer said Facebook intends to use the findings to improve the deepfake detection technology it has in production today. Deepfake detection is an area of particular concern ahead of the U.S. presidential election in November.

All the winners used the EfficientNet network architecture to construct their models. Facebook engineers found that top-performing models tended to use a form of data augmentation or augmentations that blend fake and real faces.
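As a rough illustration of that recipe — and not the winners' actual code — the sketch below fine-tunes an EfficientNet backbone as a binary real/fake classifier and applies a mixup-style augmentation that blends a real face crop with a fake one. The timm package, the B0 variant, the blending scheme, and every name here are assumptions for illustration only.

import torch
import torch.nn as nn
import timm

def build_detector() -> nn.Module:
    # EfficientNet-B0 backbone with a single logit for the "fake" class.
    # (pretrained=False keeps the sketch offline; real training would start
    # from pretrained weights.)
    return timm.create_model("efficientnet_b0", pretrained=False, num_classes=1)

def blend_faces(real: torch.Tensor, fake: torch.Tensor, alpha: float = 0.5):
    # Mixup-style augmentation: blend a real and a fake face crop and give the
    # result a soft label equal to the fake image's weight -- one plausible way
    # to realize the "blend fake and real faces" idea described above.
    blended = alpha * fake + (1.0 - alpha) * real
    return blended, torch.tensor(alpha)

if __name__ == "__main__":
    model = build_detector()
    real_crop = torch.rand(3, 224, 224)  # placeholder real face crop
    fake_crop = torch.rand(3, 224, 224)  # placeholder deepfake face crop
    x, y = blend_faces(real_crop, fake_crop, alpha=0.3)
    logit = model(x.unsqueeze(0)).squeeze()
    loss = nn.functional.binary_cross_entropy_with_logits(logit, y)
    print(f"fake probability {torch.sigmoid(logit).item():.3f}, loss {loss.item():.3f}")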


The Deepfake Detection Challenge data set will be open-sourced, with details shared next week at the Computer Vision and Pattern Recognition (CVPR) conference. CVPR was originally scheduled to be held in Seattle but will now take place entirely online starting Sunday.

“Honestly, prior to all of this, if I just wanted to download a good deepfake detector from GitHub, it didn’t really exist like nine months ago — I think that’s a problem. And so just actually having a baseline system that works reasonably well, that gives people a starting point, I think is probably at this point more important than worrying about … adversarial examples,” Schroepfer said about open-sourcing the data set.

Competing teams used the Deepfake Detection Challenge Data Set to train their models. The data set is a collection of 100,000 videos created by actors who signed consent agreements. With more than 3,500 actors, it includes 38 days of video.
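For a sense of how an accuracy figure on a labeled video set like this one is typically computed, here is a minimal sketch, not the challenge's official scoring code: per-frame fake probabilities are averaged into one score per video, and accuracy is the fraction of videos whose thresholded score matches its label. The mean aggregation, the 0.5 threshold, and the toy data are assumptions.

from statistics import mean

def video_score(frame_probs: list[float]) -> float:
    # Aggregate per-frame fake probabilities into a single score per video.
    return mean(frame_probs)

def accuracy(predictions: dict[str, list[float]], labels: dict[str, int]) -> float:
    # Fraction of videos whose thresholded score matches the label (1 = fake).
    correct = sum(
        int(video_score(probs) >= 0.5) == labels[vid]
        for vid, probs in predictions.items()
    )
    return correct / len(predictions)

if __name__ == "__main__":
    # Toy example with two videos; a real evaluation would cover the full set.
    preds = {"vid_0001": [0.9, 0.8, 0.95], "vid_0002": [0.2, 0.1, 0.3]}
    labels = {"vid_0001": 1, "vid_0002": 0}
    print(f"accuracy: {accuracy(preds, labels):.2%}")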

Facebook launched the Deepfake Detection Challenge last fall with a group of partner organizations that includes the BBC, the New York Times, several academic institutions, and the Partnership on AI’s Steering Committee on AI and Media Integrity. Amazon Web Services (AWS) contributed $1 million in cloud credits, while Schroepfer said Facebook contributed roughly $10 million to the project.

With a similar goal of using AI to better moderate content, last month Facebook launched the Hateful Memes Challenge.

The deepfake detection and hateful meme news comes as Facebook is being challenged on several fronts for its record of profiting from hate. Facebook CEO Mark Zuckerberg recently defended President Trump’s right to post a call for the military to shoot looters during large-scale protests against white supremacy and racism following the killing of George Floyd. In the post, Trump used a phrase with ties to bigotry dating back to the 1960s. Twitter labeled the same language in a tweet as “glorifying violence.”

Facebook employees held a virtual walkout in response to the company’s position, and two senior employees reportedly threatened to resign, according to the New York Times. A Wall Street Journal report late last month asserted that Facebook knowingly profits from a divisive recommendation algorithm that promotes extremism and hate and that the company has avoided change to prevent any potential backlash from conservative politicians.
