Facebook researchers propose using language models for fact-checking

In a paper published on the preprint server Arxiv.org, researchers at Facebook and the Hong Kong University of Science and Technology propose using natural language models as fact-checkers, inspired by the observation that models trained on documents from the web display a surprising amount of world knowledge. Their proposed approach employs a verification classifier model that, when given an original claim and a generated claim, determines whether the claim is supported or refuted, or whether there is insufficient information to make a call.
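
For readers who want a concrete picture of that final step, here is a minimal sketch of a three-way verification classifier over a (claim, evidence) pair, written with the open-source Hugging Face Transformers library. The checkpoint name, label order, and verify helper are illustrative placeholders rather than the researchers' released code, and the classifier would need fine-tuning on labeled pairs before its predictions mean anything.

```python
# Minimal sketch of a three-way verification classifier over a (claim, evidence)
# pair: supported / refuted / not enough info. This is NOT the paper's released
# code; "bert-base-uncased" is only a placeholder checkpoint that would need
# fine-tuning on labeled claim-evidence pairs.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

LABELS = ["SUPPORTED", "REFUTED", "NOT ENOUGH INFO"]

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=len(LABELS)
)
model.eval()

def verify(claim: str, evidence: str) -> str:
    """Return the predicted relationship between a claim and its evidence."""
    inputs = tokenizer(claim, evidence, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    return LABELS[int(logits.argmax(dim=-1))]

print(verify("The Eiffel Tower is located in Paris.",
             "The Eiffel Tower is located in Paris, France."))
```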

According to a survey commissioned by Zignal Labs, 86% of Americans who consume news through social media don’t always fact-check the information they read, and 61% are likely to like, share, or comment on any content suggested by a friend. Despite Facebook’s best efforts, fake news continues to proliferate on the platform, with misinformation about the pandemic and protests, for example, attracting thousands to millions of eyeballs.

The coauthors of this paper — who claim theirs is the first work of its kind — posit that language models’ ability to memorize information might improve the effectiveness of Facebook’s fact-checking pipeline. They also assert that the models might speed up fact verification by eliminating searches over massive spaces of documents and by automating the training and verification steps that are currently conducted by humans.

The researchers’ end-to-end fact-checking language model performs automatic masking, choosing which entities (i.e., people, places, and things) and tokens (words) to mask in a way that draws on its ability to recover structure and syntax. (This approach arose from the observation that factuality often depends on the correctness of entities and the possible relations between them rather than on how the claim is phrased, according to the researchers.) The model then takes its top predicted token and fills in the masked span to create an “evidence” sentence, after which it uses the claim and evidence to obtain entailment features by predicting the “truth relationship” between the text pair. For example, given the pair of sentences T and H, the model would conclude “sentence T entails H” if a human reading T would infer that H is most likely true.
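
The masking-and-filling step can be illustrated with an off-the-shelf pretrained masked language model. In the sketch below the entity to mask is chosen by hand and the model name is a placeholder; the paper's system selects masks automatically and may use a different underlying model.

```python
# Sketch of the mask-and-fill step: mask an entity in the claim and let a
# pretrained masked language model propose the most likely replacement,
# yielding a generated "evidence" sentence to compare against the original
# claim. Masking is done by hand here for illustration only.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

claim = "The Eiffel Tower is located in Paris."
masked_claim = claim.replace("Paris", fill_mask.tokenizer.mask_token)

# The top prediction becomes the generated "evidence" sentence.
top = fill_mask(masked_claim)[0]
evidence = top["sequence"]

print(masked_claim)   # The Eiffel Tower is located in [MASK].
print(evidence)       # e.g. "the eiffel tower is located in paris."
print(top["score"])   # the model's confidence in the filled token
```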

The researchers conducted experiments on FEVER, a large-scale fact-checking data set whose claims are verified against roughly 5.4 million Wikipedia articles. Using a publicly available pretrained BERT model as the fact-checking language model, they measured its accuracy at labeling claims.
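
Evaluation in this setting reduces to label accuracy: compare the system's predicted verdict for each claim against the gold FEVER label. The sketch below uses two made-up examples and a stub predictor in place of the real dataset and pipeline, purely to show the bookkeeping.

```python
# Sketch of a FEVER-style evaluation loop: compare predicted labels against
# gold labels and report accuracy. The examples and predict_label stub are
# hypothetical stand-ins for the real dataset and the mask-fill-then-verify
# pipeline described above.
FEVER_LABELS = ["SUPPORTS", "REFUTES", "NOT ENOUGH INFO"]

examples = [
    ("The Eiffel Tower is located in Paris.", "SUPPORTS"),
    ("The Eiffel Tower is located in Berlin.", "REFUTES"),
]

def predict_label(claim: str) -> str:
    # Placeholder: a real system would mask the claim, fill it with a
    # pretrained language model, and run the verification classifier
    # on the resulting (claim, evidence) pair.
    return "SUPPORTS"

correct = sum(predict_label(claim) == gold for claim, gold in examples)
accuracy = correct / len(examples)
print(f"label accuracy: {accuracy:.1%} (random baseline ~33%)")
```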

The best-performing BERT model achieved 49% accuracy without the need for explicit document retrieval or evidence selection, according to the team, suggesting that it was at least as effective as the standard baseline (48.8%) and random baseline (33%). But it fell short of the state-of-the-art system tested against FEVER, which achieved upwards of 77% accuracy.

The researchers attribute the shortfall to the language model’s limitations; claims in FEVER with fewer than five words, for instance, provide little context for prediction. But they say their findings demonstrate the potential of pretraining techniques that better store and encode knowledge, and that they lay the groundwork for fact-checking systems built on models already shown to be effective at generative question answering.

“Recent work has suggested that language models (LMs) store both common-sense and factual knowledge learned from pre-training data,” the coauthors wrote. “[W]e believe our approach has strong potential for improvement, and future work can explore using stronger models for generating evidences, or improving the way we mask claims.”
