We Need Human Intelligence for Artificial Intelligence

Fake news has dominated news cycles, pushing accountability for developing technology that flags and removes it onto tech and social media platforms.

Nicolás Ávila, Chief Technology Officer, North America

March 8, 2024

5 Min Read
[Image: Human and robot / robotic automation illustration. Credit: Brain light via Alamy Stock]

Is the video we received on WhatsApp authentic? Has the viral image on X (formerly known as Twitter) been manipulated? Is the audio genuine, or could it have been created using software? 

Artificial intelligence has the potential to transform reality, but also to distort it through the creation of deepfakes, the distribution of fake news, and other tools aimed at confusing audiences. We saw the harmful effects of fake news as early as 1938, when panic swept the streets of New York and New Jersey during Orson Welles’ radio program “The War of the Worlds,” which narrated an alien invasion of the United States. Despite the broadcaster’s warnings at the beginning and end of the segment that the story was fictional, millions of confused listeners believed it to be true. The following day, the New York Times headline read, “Terrified radio listeners take a war play as something real.” Without intending to, Welles started the modern rumor: fake news. 

Since the 1930s, other instances of fake news have streamed across headlines and newscasts. Before the 1990s, newscasters could often correct false reports quickly and limit the damage. In the modern digital age, however, news sources and articles are shared more widely and with more speed than ever thought possible. The consequences of these stories range from trivial matters, such as the AI-generated song that impersonated Bad Bunny and angered the singer, to more serious issues like elections. Credible news reporting is particularly relevant in a year when the highest office will be decided by voters in 70 countries.  

Today, a single radio program like “The War of the Worlds” would not have the same impact. The simultaneity of modern media would allow someone to debunk it quickly on one of the many available platforms. However, the multiplication of channels, combined with the creative capacity of artificial intelligence, has another effect: Anyone can “lie” like Orson Welles, and not necessarily for cultural or artistic purposes. It is, therefore, no coincidence that the World Economic Forum’s Global Risks Report ranks misinformation and disinformation as the No. 1 risk in the short term and the fifth-greatest over the next decade. 

As regulators, tech experts, and business leaders develop necessary safeguards for generative AI, organizations should also establish internal best practices for IT teams and other departments to mitigate the harmful effects of AI.   

Implementing Advanced Content Filtering Algorithms 

As with many tech advancements, GenAI can lead to the creation and distribution of misinformation. However, GenAI and AI-enabled tools can also provide the means to mitigate the spread of deepfake photos and videos, inaccurate text-based articles, and other falsehoods through advanced AI filtering algorithms. As teams develop and learn more about these tools, they can detect machine-generated misinformation more quickly and take action to remove or correct it. 
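
As an illustration, here is a minimal sketch of one such filtering step, assuming the open-source Hugging Face transformers library and a publicly available detector checkpoint. The model name, label strings, and confidence threshold below are illustrative, not a recommendation; teams should benchmark any detector on their own content before relying on it.

```python
# A minimal sketch of an AI-text filtering step, assuming the open-source
# Hugging Face transformers library and a publicly available detector
# checkpoint. Model name, label string, and threshold are illustrative.
from transformers import pipeline

detector = pipeline(
    "text-classification",
    model="openai-community/roberta-base-openai-detector",  # example checkpoint
)

def flag_if_machine_generated(text: str, threshold: float = 0.9) -> bool:
    """Return True when the detector confidently scores text as machine-written."""
    result = detector(text, truncation=True)[0]  # {"label": ..., "score": ...}
    # This example checkpoint labels outputs "Fake" (machine) vs. "Real" (human);
    # other detectors use different label names.
    return result["label"] == "Fake" and result["score"] >= threshold

if __name__ == "__main__":
    sample = "A viral post copied from a social feed for screening."
    if flag_if_machine_generated(sample):
        print("Flagged: route to a human reviewer before amplifying.")
    else:
        print("No strong machine-generation signal detected.")
```

Note that no detector is definitive; the point of a confidence threshold is to route borderline content to human reviewers rather than to delete it automatically.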

Establishing Collaborative Initiatives With Fact-Checking Organizations 

In the early days of AI adoption, safeguards and oversight programs were slow to take hold, especially as companies and industries were still learning the real risks and challenges of the tech. However, oversight is necessary to ensure high-quality AI products, including written content, data collection processes, and customer recommendations. With support from third-party vendors or dedicated internal employees, the data used within AI programs needs to be verified, protected, and double-checked to ensure the best AI solutions are deployed.  
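
For teams pairing internal review with external fact-checkers, a lightweight integration might look like the following sketch, which assumes an API key for Google’s Fact Check Tools API (the v1alpha1 claims:search endpoint); the field names follow the published schema but should be verified against the current documentation.

```python
# A minimal sketch of checking a suspect claim against a third-party
# fact-checking service, assuming an API key for Google's Fact Check Tools
# API (v1alpha1 claims:search). Field names follow the published schema but
# should be verified against the current documentation.
import requests

SEARCH_URL = "https://factchecktools.googleapis.com/v1alpha1/claims:search"

def lookup_claim(claim_text: str, api_key: str) -> list[dict]:
    """Return publisher ratings for fact-checked claims matching the text."""
    resp = requests.get(
        SEARCH_URL,
        params={"query": claim_text, "key": api_key},
        timeout=10,
    )
    resp.raise_for_status()
    findings = []
    for claim in resp.json().get("claims", []):
        for review in claim.get("claimReview", []):
            findings.append({
                "claim": claim.get("text"),
                "publisher": review.get("publisher", {}).get("name"),
                "rating": review.get("textualRating"),  # e.g., "False"
                "url": review.get("url"),
            })
    return findings
```

A pipeline like this can surface existing fact-checks automatically, leaving human reviewers to handle the claims no fact-checker has yet examined.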

Reskilling Current Workforces  

Once ready to broadly incorporate AI tools, organizations should appoint internal AI experts: dedicated employees who have learned and tested the tech in their daily activities and can deliver detailed tutorials with specific instructions for each team’s functions. Whether they work in sales, marketing, engineering, or human resources, workers need to know the dos and don’ts of AI for their role, but they won’t be able to until someone has paved the way.   

Enhancing User Education and Awareness Programs 

Within corporate education or awareness initiatives, teams can use GenAI and AI tools to create a dynamic, user-centric approach, tailoring information to specific user needs and preferences. For example, if a member of the marketing team is learning about GenAI tools, the program will tailor its content to the specific tasks and needs of the marketing team. This approach significantly enhances engagement and understanding. In addition, teams should employ interactive workshops, online modules, and real-world case studies to give audiences practical insights, fostering a culture of continuous learning and adaptation in an ever-evolving technological landscape. 

Continuously updating these tutorials for employees and consumers will be key to long-term success, especially as we continue to unlock more of AI’s capabilities.  

Increasing Scholarships and Accessibility to AI Skills Training 

We need to foster continued career growth and interest by creating a pipeline of new workers to support these innovations. As business leaders, we are in a position to attract incoming employees by creating scholarships or offering internships that let students learn, and even train in, the tech. For early scholarship programs, I recommend initiating partnerships with educational institutions and industry associations to amplify the impact of these initiatives. In addition, companies can offer mentorship opportunities within any scholarship program, giving students hands-on projects and exposure to real-world applications of AI. This will not only attract talent but also ensure a well-rounded skill set that aligns with the evolving demands of the tech landscape. 

We can all agree these actions are important, but many companies can’t enact some or any of them due to consumer demand for speedy innovation, user resistance, and a strained, understaffed developer workforce. Despite these obstacles, it is incumbent on tech companies to lead the charge in responsibly developing and implementing AI. To do this, we must have human collaboration with AI -- this is how we will harness the potential of AI and ensure that we use it for good while reducing its harmful effects.  

It is the responsibility of tech developers and business leaders to consciously create AI and build in regular human touchpoints for monitoring harmful generated content. It is the responsibility of consumers and politicians to flag AI-generated content and support regulations for the tech. Working together, we can find the real value in AI. 

About the Author

Nicolás Ávila

Chief Technology Officer, North America, Globant

As the Chief Technology Officer of North America, Nicolás Ávila leverages Globers’ technical expertise to find the most innovative solutions for the world’s top companies. His experience with Globant has led him to live and work in seven cities on transformational projects for top clients in banking, retail, hospitality, and media & entertainment. With over a decade of experience in software engineering, Nicolás came to Globant after leading technology teams at Motorola and HP.  
