The Future of Content Generation: The Rise of NLG

Explore the future of content generation with natural language generation and other emergent tech.

November 30, 2022

Natural language generation (NLG) is a key element of the content creation process that is only now being recognized as a leading technology marketers should leverage. Dr. Panagiotis Angelopoulos, CTO at Persado, discusses the rise of NLG, its primary use cases, and how it can drive the achievement of business goals.

Language is credited as one of humankind’s greatest achievements; it enables us to communicate complex ideas, express our deepest emotions and connect with each other. So, it is no surprise that one of the most sought-after applications of artificial intelligence (AI) is the ability to mimic how humans communicate verbally and in writing. If we were able to build AI technology capable of communicating ideas the same way humans can, that would lead to one of the greatest achievements in computer science. 

Enter Language Models 

Over the last 20 years, technology experts focused on teaching machine language to humans. With that lesson now largely learned, the next goal is teaching human language to machines. This ongoing pursuit has already resulted in the creation of language models: a way to mathematically model spoken and written language. The ultimate goal is for AI to use language models to generate human-level language.

Initially, language models were based on statistical methods of calculating the frequency of word sequences. While this approach could model some basic properties of a language, it lacked the ability to generate meaningful, on-message text. Only in the last few years have developments in neural networks allowed us to build more sophisticated language models that can use context and the long-range dependencies needed to generate human-level, quality language. The explosion in deep learning research over the past decade has produced expansive models whose additional layers and artificial neurons capture most of the nuances of human language.
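The word-frequency approach described above can be sketched as a toy bigram model: count how often each word follows another, then turn those counts into next-word probabilities. The tiny corpus here is purely illustrative.

```python
from collections import Counter, defaultdict

# Toy corpus standing in for real training text (illustrative only).
corpus = "the cat sat on the mat the cat ate the food".split()

# Count bigram frequencies: how often each word follows another.
bigram_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigram_counts[prev][nxt] += 1

def next_word_probs(word):
    """Relative frequency of the words observed immediately after `word`."""
    counts = bigram_counts[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

print(next_word_probs("the"))
# "the" is followed by "cat" twice, "mat" once, and "food" once
```

A model like this can only look one word back, which is exactly why it struggles with the long-range dependencies that neural language models handle well.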

Arguably, the most well-known language model to date has been developed by OpenAI. Its latest iteration, GPT-3, has 175 billion parameters (the values the model learns from its training data) and was trained on over 45 terabytes of text drawn from sources such as the Common Crawl, Wikipedia, and books. OpenAI showed that such a model can learn to perform many tasks, such as summarization and question answering, with a very high level of accuracy without being explicitly trained on them. The creativity and quality of the output are truly impressive and often indistinguishable from what a human could produce.
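The "without being explicitly trained" behavior is typically elicited through few-shot prompting: examples of the task are placed directly in the model's input, and the model continues the pattern. The sketch below only builds such a prompt; the questions and answers are hypothetical, and the call that would send the prompt to an actual model is deliberately omitted.

```python
def build_few_shot_prompt(examples, query):
    """Concatenate labeled examples and a new query into one prompt string."""
    lines = ["Answer the question based on general knowledge."]
    for question, answer in examples:
        lines.append(f"Q: {question}\nA: {answer}")
    # The trailing "A:" invites the model to complete the answer.
    lines.append(f"Q: {query}\nA:")
    return "\n\n".join(lines)

examples = [
    ("What is the capital of France?", "Paris"),
    ("What is the capital of Japan?", "Tokyo"),
]
prompt = build_few_shot_prompt(examples, "What is the capital of Italy?")
print(prompt)
```

No weights are updated here; the "training" happens entirely in the model's context window, which is what makes large generalist models so flexible.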

See More: Why Natural Language Processing is NOT the Future of Business Intelligence

Technological Concerns for Standardized Language Models 

Most people who have interacted with GPT-3 have been fascinated by its ability to write coherent, high-quality language, but two main questions remain: “Should we trust it?” and “Are there any dangers in using it?” Unfortunately, even though the quality of the text is extremely good, our scientific methods have not yet reached the point where the model actually understands what it is writing. It puts together words and phrases that make perfect sense but can easily, albeit inadvertently, mix verified and fictitious facts in a convincing way. This can lead to dangerous situations, such as the technology writing news articles that contain misinformation and spread false news. While AI is a great tool for inspiration, allowing humans to accelerate their workflows and focus on the most important aspects of their jobs, it still needs constant human supervision with checks and balances.

A perfect example of why these checks and balances are necessary is that legacy language models only know what was in their training data, which is frozen at a point in time. A model might not know who the current U.S. President is or that society has been living through a pandemic for over two years.

The Role of NLG  

Although larger language models do their best to learn all they can from aggregate human knowledge, and are very good generalists because of it, they are not the solution to every problem. You wouldn’t want a very generic model writing content for your enterprise communications, where language needs to adhere to strict brand guidelines, especially when the goal is to achieve maximum returns.

I mention this because, as momentum around NLG adoption continues and the technology is further recognized as a valuable tool, executives and marketing leaders (and their teams) will need to evolve to work alongside AI to create optimal customer experiences. Recent data shows that 53.9% of US-based business executives are already leveraging AI or machine learning to offer a personalized experience to their customers, and this figure is likely to increase dramatically in the coming months and years. Personalized digital communications present the largest opportunity to drive experiences that attract customers and build customer lifetime value. However, digital overload is overwhelming customers, resulting in fast-declining conversion rates. And while personalization drives tremendous value, existing approaches fall short because personalized offers and incentives fail to address the most important factor driving conversion: motivation. This is where AI and NLG come into play. The challenge is typically not an organization’s offer or value proposition but rather the need for language that motivates engagement and action on a personal level.

Specialized models developed specifically for enterprise communications can remedy this challenge by generating language that speaks to each customer as if the company knows them personally. This task can only be achieved with AI and is usually referred to as a “narrow AI implementation.” It is accomplished by developing a unique classification of language for enterprise communications (e.g., marketing, customer service) and tagging a vast number of communication examples with behavioral concepts such as emotions and narratives. The models are then further refined using the resulting interactions between brands and customers. Unlike generic models, these are designed for a specific purpose and evolve based on how customers interact with the output.
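The tagging step described above can be sketched with a toy rule-based tagger. The labels and keyword cues here are illustrative assumptions; production systems of this kind would use trained classifiers over large, hand-labeled corpora rather than a fixed lexicon.

```python
# Illustrative lexicon mapping behavioral labels to textual cues.
# Both the labels and the cue phrases are assumptions for this sketch.
BEHAVIORAL_LEXICON = {
    "exclusivity": ["members only", "exclusive", "invitation"],
    "urgency": ["last chance", "ends tonight", "hurry"],
    "gratitude": ["thank you", "we appreciate"],
}

def tag_message(text):
    """Return the sorted list of behavioral labels whose cues appear in text."""
    lower = text.lower()
    return sorted(
        label
        for label, cues in BEHAVIORAL_LEXICON.items()
        if any(cue in lower for cue in cues)
    )

print(tag_message("Last chance: an exclusive offer ends tonight!"))
# → ['exclusivity', 'urgency']
```

Once every past message carries labels like these, a model can learn which emotional framings drive engagement for a given audience, which is the feedback loop the paragraph describes.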

The opportunities for companies using this type of NLG are immense. As Boston Consulting Group pointed out in a recent article about NLG and personalization, “a major new force is taking shape in personalization, one that could generate as much as $200 billion in incremental revenue for the Fortune 500 and up to $800 billion worldwide.” This figure alone shows us that now is the time for companies to evaluate these capabilities if they want to drive tectonic impact on their enterprise-wide customer and consumer engagements. 

The future of efficient and effective content generation is now. The technologies capable of developing high-value language and messaging are already accessible, and the companies that utilize them will be able to multiply returns and capture massive value in a short timeframe. That is even more critical in times of economic turbulence, such as what we are experiencing now. Forward-looking companies must implement available tools such as NLG to streamline and scale their operations and surpass their competitors in the race for market share and customer loyalty.

Do you think NLG and other content generation tech will soon replace human content production? Tell us on Facebook, Twitter, and LinkedIn.


Dr. Panagiotis Angelopoulos
Panagiotis has more than 15 years of experience in developing machine learning and AI applications. He holds an MSc in Applied Mathematics and a PhD in Statistics, and is considered one of the leading experts in the design and analysis of experiments. In his role as CTO at Persado, he leads the development of the AI technology driving Persado’s platform, managing the Data Science and Content teams together with the Engineering, QA, and IT functions. His passion for data and machine learning keeps him motivated to find new applications for the commercialization of machine learning methods in business.