How AI Can Rid The Internet of Fake News and Bias

Find out how AI can tackle fake news and bias to create authentic experiences.

January 18, 2023

There is always a chance that the information you hear or read may not be accurate, whether it comes from a physical newspaper or magazine, an internet source, or the radio. False information has existed for as long as human culture itself, but the sheer volume of information we receive from the connected, online world makes us particularly susceptible to inadvertently ingesting material that has been twisted or falsified. Garry M. Paxinos, CTO of netTALK CONNECT and NOOZ.AI, shares how AI can help tackle the issue of fake news and the complexities of bias.

Consumers are accustomed to having their opinions influenced by what they read, see, and hear online, such as through influencer marketing or celebrity endorsements. Opinions carry a lot of power whether or not facts support them, and much fake news depends on stirring up strong emotions. When our attention and feelings are engaged, we often fail to pause and consider whether what we have heard or read is accurate.

According to MIT researchers, truthful news takes six times longer than fake news to reach 1,500 people on Twitter. Furthermore, the chain length (the number of people who shared a social media post) of accurate versus fake news was highly disproportionate: verifiable news never exceeded a chain of 10, while false news reached chains of 19. This is partially due to bot swarms used by malevolent actors to spread incorrect information.

Disinformation now affects people, governments, and enterprises on a worldwide scale. Finding and separating so-called “fake news” in today’s expanding digital information economy is a significant task. However, artificial intelligence (AI) improvements may make it easier for online information users to distinguish between reality and fabrication.

Let’s tackle how AI can be harnessed to stop the spread of misinformation and make the internet a more balanced place to source news.

Where Does AI Fit in with Assessing Articles?

Legitimate companies use AI to locate and target the most likely consumers of a message or point of view, employing advanced algorithms to discover and reach the demographics most receptive to it. Google, for example, implemented its RankBrain algorithm back in 2015 to refine its ability to recognize authoritative results.

To distinguish computer-generated material from human-produced articles, AI-based technologies can perform linguistic analysis on textual content and find clues such as word patterns, syntactic structure, and readability. These algorithms can analyze any text to find instances of hate speech by looking at word vectors, word placement, and connotation.
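
As a rough illustration of the kind of surface-level signals such tools might extract, the short Python sketch below computes a few simple stylometric features: average sentence length, vocabulary diversity, and a long-word ratio as a crude readability proxy. The feature set and code are illustrative assumptions, not the actual algorithm of any particular detection system.

```python
import re
from statistics import mean

def linguistic_features(text: str) -> dict:
    """Compute a few simple stylometric signals of the kind mentioned above
    (word patterns, sentence structure, readability). Illustrative only."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text.lower())

    return {
        # Average sentence length is one weak stylistic signal.
        "avg_sentence_len": mean(len(re.findall(r"[A-Za-z']+", s)) for s in sentences),
        # Vocabulary diversity: unique words divided by total words.
        "type_token_ratio": len(set(words)) / len(words),
        # Share of long words, a very crude readability proxy.
        "long_word_ratio": sum(len(w) > 6 for w in words) / len(words),
    }

if __name__ == "__main__":
    sample = ("The quarterly report shows steady growth. "
              "Analysts expect the trend to continue into next year.")
    print(linguistic_features(sample))
```

In a real system, features like these would typically feed a trained classifier alongside word embeddings rather than being used on their own.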

New Applications and Projects

Fake news often originates from a single illegitimate source before the information proliferates. The Fandango project starts from articles that human fact-checkers have identified as false and then looks for social media posts or internet sites with identical terms or claims. This enables journalists and specialists to track the origins of the false information and eliminate hazards before they have a chance to get out of hand.
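
The sketch below illustrates the general idea behind this kind of matching: comparing new posts against claims that human fact-checkers have already debunked, here with TF-IDF vectors and cosine similarity from scikit-learn. It is not the Fandango project's actual pipeline; the claims, posts, and threshold are hypothetical.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Claims already flagged as false by human fact-checkers (hypothetical examples).
debunked_claims = [
    "Miracle supplement cures all known diseases overnight",
    "City water supply secretly replaced with industrial runoff",
]

# New posts scraped from social media or news sites (also hypothetical).
new_posts = [
    "This miracle supplement cures every known disease overnight",
    "Local bakery wins regional bread-making award",
]

# Vectorize all texts in one shared vocabulary, then compare each new post
# against every debunked claim.
vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform(debunked_claims + new_posts)
scores = cosine_similarity(matrix[len(debunked_claims):], matrix[:len(debunked_claims)])

for post, row in zip(new_posts, scores):
    if row.max() > 0.3:  # arbitrary threshold, for illustration only
        print(f"Possible match with a debunked claim ({row.max():.2f}): {post!r}")
```

A production system would more likely rely on sentence embeddings and entity matching than bag-of-words similarity, but the workflow of flagged claims in, candidate matches out, is the same.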

Politifact, Snopes, and FactCheck use human editors to conduct the primary investigation necessary to confirm the veracity of a report or an image. Once a fake has been identified, AI systems search the web for similar information that could cause social unrest. Applications can then assign a reputation score to an article once the material has been determined to be authentic.

Several AI engines currently use the following measures in their evaluation score:

  • Sentiment analysis: A journalist’s attitude toward the news in general or the particular topic they write about.
  • Opinion analysis: Examining a journalist’s work for personal feelings, viewpoints, convictions, or assessments.
  • Revision analysis: Examining how a news story has changed over time and how it has manipulated public perception and mood.
  • Propaganda analysis: Detecting up to 18 different persuasion strategies to help spot potential false information.

All four of these combined can give a full picture of how trustworthy an article is and what we’re up against.
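
As a toy illustration of how such measures might be folded into a single trustworthiness estimate, the sketch below simply averages four normalized scores and subtracts the result from a perfect score. The 0-to-1 scale, equal weighting, and field names are assumptions made for illustration; they are not how any specific engine actually combines its signals.

```python
from dataclasses import dataclass

@dataclass
class ArticleSignals:
    # Each score is assumed to be normalized to the 0..1 range, where higher
    # means stronger sentiment, more opinion, heavier revision, or more
    # propaganda techniques detected.
    sentiment: float
    opinion: float
    revision: float
    propaganda: float

def trust_score(s: ArticleSignals) -> float:
    """Fold the four signals into one 0..1 trust estimate.
    Equal weighting is an illustrative choice, not a known production formula."""
    penalty = (s.sentiment + s.opinion + s.revision + s.propaganda) / 4
    return round(1.0 - penalty, 2)

# A mostly neutral, lightly opinionated article scores high on trust.
print(trust_score(ArticleSignals(sentiment=0.2, opinion=0.3, revision=0.1, propaganda=0.05)))  # 0.84
```

A real engine would presumably learn its weights from labeled examples and report the individual signals alongside the combined score, so readers can see why an article was flagged.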

The Challenges with AI and How to Overcome Them

Language models like GPT-3 can already create articles, poetry, and essays from a one-line prompt, and AI is close to perfecting material that reads as if a person wrote it. AI has made it so easy to manipulate all kinds of information that open-source programs like FaceSwap and DeepFaceLab allow even inexperienced users to produce deepfakes that could put someone at the center of social unrest.

These issues are made worse because semantic analysis algorithms cannot decipher the substance of hateful imagery that has not been altered at all but is instead being distributed with harmful or inaccurate context.

Once fraudulent content has been found, removing it is more challenging than it would seem. Organizations may be accused of censorship and of attempting to hide information that one group or another believes to be true. Finding a balance between the right to free expression and the battle against false information and fake news is difficult.

AI also typically lacks the ability to recognize humor and parody, so it may classify content that is meant lightheartedly or as a joke as malevolent misinformation. But there is no denying that AI can be a huge asset in the fight against fake news. Because of the enormous volume of material it can process, technology is crucial in the battle against false news on the internet.

Fake news is not a problem that algorithms alone can fix; there needs to be a change of mindset in how we approach the knowledge-acquiring process. While the crowdsourcing of collaborative knowledge among professional groups is essential to evaluating raw data, communities of knowledgeable users may also support ethical monitoring operations.

A lack of proactive action involving all parties can hasten the loss of public confidence in institutions and the media, which is a prelude to anarchy. AI-based technology must be our partner in the fight against internet misinformation until people develop the ability to analyze online content objectively.

Do you think AI can really help reduce the spread of biased content and fake news? Tell us on Facebook, Twitter, and LinkedIn.

Image Source: Shutterstock

Garry M. Paxinos
Garry M. Paxinos is CTO of netTALK CONNECT and NOOZ.AI. He is also CTO at NT CONNECT, netTALK MARITIME, and Axios Digital Solutions, Head of Technology at Sezmi Corporation, and SVP and Chief Technologist at US Digital Television. He currently holds numerous patents.