Why GenAI Image Tools Have Created A New Demand For Authenticity

From rapid proliferation to legal cases, explore the complex battlefield of content creation where GenAI meets ethics

April 15, 2024


Rob Sewell, CEO of SmartFrame, explores the impact of GenAI tools on visual content creation, from democratization to concerns over authenticity, and addresses the roles of brands, publishers, and regulators in tackling intellectual property issues and the potential for misinformation.


Generative artificial intelligence (GenAI) has quickly revolutionized the creative industry, with a recent proliferation of models joining frontrunners such as OpenAI’s DALL·E, Midjourney, and Stability AI’s Stable Diffusion. These rapidly evolving tools require little technical expertise beyond a text prompt, and their use has been democratized at remarkable speed: more than 15 billion images were created using text-to-image algorithms in the year leading up to August 2023.

For context, it took photographers 150 years, from the first photograph in 1826 until 1975, to reach 15 billion images.

Alongside myriad uses across fine art, interior design, and medicine, the growth of the GenAI market has met brands’ rising demand for visual content, making it easier than ever to create immersive, engaging, and personalized images that captivate customers.

Fun Creative Tool Or An Attack On Human Creativity?

Despite its ability to supercharge user engagement, GenAI software’s training on millions of digital images makes it deeply problematic. Any artist or photographer whose work is online may have it trawled by these tools and incorporated into generated images.

Digging deeper, the proliferation of AI-generated visuals threatens to undercut human creativity, especially when such images start to win photography and art competitions. Tensions rose further last year when Stephen Thaler, whose algorithm autonomously generated the artwork ‘A Recent Entrance to Paradise,’ argued that the AI was its author and that copyright should be held by Thaler himself as the algorithm’s owner. The courts rejected this, but the judge’s allusion to “new frontiers in copyright as artists put AI in their toolbox” points to future nuance.

So far, the creators leading the rebellion against technology they see as anti-artist have focused on the scraped datasets that feed GenAI imaging models. Artists have brought a class action lawsuit against Midjourney, DeviantArt, and Stability AI for copying their images for training. The debate was lent new oxygen in January, when a list of more than 16,000 artists whose work was allegedly used to train Midjourney was leaked, among them Frida Kahlo, Damien Hirst, and David Hockney.

Meanwhile, Getty Images has filed a lawsuit against Stability AI, the creator of the AI art generator Stable Diffusion. The company claims Stability AI used 12 million images from its database without permission or compensation, in breach of copyright law and trademark protections.

GenAI’s Blurring Of Truth And Fiction Highlights The Urgency For Transparency

Beyond undercutting intellectual property, some forms and uses of AI-generated images are especially concerning. At its most sinister, the manipulation of human images is a vehicle for misinformation, disinformation, and deepfakes. 

This has huge implications for 2024, when 49% of the global population will head to the polls, especially in light of a study showing that we can detect a fake image of a real-world scene only 60% of the time. To top it all off, the risk of bias in GenAI is now plain to see, as is the danger of overcorrecting for it, demonstrated recently by Google’s Gemini AI platform.

The explosion of synthetic content has also meant that audiences face new challenges in verifying content online. This has tangible repercussions for consumer trust, with Gartner Marketing Predictions for 2024 stating that half of consumers will cut or significantly limit their social media interactions by 2025, citing misinformation as a concern.

Awareness of the prevalence of GenAI, and the consumer scrutiny that comes with it, is leading to a push for clarity on when it has been used. A 2023 study by the IPA (Institute of Practitioners in Advertising) showed that 74% of consumers say AI-generated content should be disclosed as such. Gartner, meanwhile, predicts that by 2026, 60% of CMOs will have adopted content authenticity technology to protect audience trust.

The onus is now on brands and publishers to listen to consumers on authenticity and to recognize its importance for revenue-critical brand trust.

See More: Evolving C-Suite: How to Lead in the Era of Gen AI

How Brands And Publishers Can Reclaim Consumer Trust

GenAI is here to stay. As such, governments are scrambling to address the risks it brings while embracing innovation. Detecting GenAI content was a key tenet of Joe Biden’s Executive Order on Artificial Intelligence last year, which Adobe, Meta, and OpenAI, among others, have all committed to following. Elsewhere, the newly introduced EU AI Act is designed to ensure human oversight of the technology, including stricter rules for AI tools that present a greater risk to society. The act focuses on compliance with copyright laws and transparency around the materials used to train these models. The UK, meanwhile, has no regulation planned as yet.

Brands and publishers, meanwhile, have been battling misinformation and confusion around the origins of media by introducing measures to increase trust and transparency. In some areas, the AI industry has adapted to accommodate this. AI image generator BRIA, for example, uses only licensed training data, pays for its image use, and displays the training images used when an image is generated. AI watermarking — embedding a signal within an image to identify whether it is AI-generated — has also become a popular way to add a layer of transparency. 
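To make the watermarking idea concrete, here is a minimal Python sketch that hides a short marker in the least significant bits of an image’s pixel data and later reads it back. This is a deliberately simplified illustration of the concept, not any vendor’s actual scheme: the TAG string and function names are hypothetical, and unlike production watermarks, an LSB signal would not survive compression or editing.

```python
# Minimal sketch of invisible watermarking: hide a marker string in the
# least significant bits (LSBs) of the blue channel, then read it back.
# Hypothetical example; production watermarks are far more robust.
import numpy as np
from PIL import Image

TAG = "AI-GEN"  # hypothetical marker string

def embed_watermark(img, tag=TAG):
    """Write `tag` as bits into the blue channel's LSBs."""
    pixels = np.array(img.convert("RGB"))
    bits = np.unpackbits(np.frombuffer(tag.encode(), dtype=np.uint8))
    blue = pixels[..., 2].ravel()  # flattened copy of the blue channel
    blue[: bits.size] = (blue[: bits.size] & 0xFE) | bits
    pixels[..., 2] = blue.reshape(pixels.shape[:2])
    return Image.fromarray(pixels)

def detect_watermark(img, tag=TAG):
    """Return True if the blue-channel LSBs begin with `tag`'s bits."""
    pixels = np.array(img.convert("RGB"))
    n_bits = len(tag.encode()) * 8
    bits = pixels[..., 2].ravel()[:n_bits] & 1
    return np.packbits(bits).tobytes() == tag.encode()

# Usage: mark a synthetic image, then test both versions.
original = Image.new("RGB", (64, 64), (200, 120, 40))
marked = embed_watermark(original)
print(detect_watermark(original), detect_watermark(marked))  # False True
```

Schemes such as SynthID pursue the same goal with learned signals spread across the whole image, precisely so that cropping, compression, and re-editing do not erase the mark.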

However, one of the most prominent protection measures is the use of Content Credentials to authenticate images. As a founding member of the Content Authenticity Initiative, which was set up in 2019 to promote industry standards for authenticating digital media, Adobe launched its “nutrition label” for images in October. The Content Credentials “CR” icon empowers viewers to inspect an image’s origins and editing history. This applies to AI-generated images, including those created with Adobe’s Firefly generation tool.
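Conceptually, a Content Credential is a signed manifest of an image’s origin and edit history, bound to the image bytes by a cryptographic hash. The Python sketch below illustrates that shape only; it is not the real C2PA format, which uses certificate-based signatures and embeds manifests in the file itself, and the HMAC key and field names here are simplified stand-ins.

```python
# Conceptual sketch of a provenance manifest in the spirit of Content
# Credentials: origin and edit history are tied to the image by a hash
# and protected by a signature. Simplified stand-in, not the C2PA spec.
import hashlib
import hmac
import json

SIGNING_KEY = b"demo-key"  # hypothetical; real manifests use PKI certificates

def make_manifest(image_bytes, generator, edits):
    """Build and sign a minimal provenance record for an image."""
    claim = {
        "asset_hash": hashlib.sha256(image_bytes).hexdigest(),
        "generator": generator,      # e.g., the AI model or camera used
        "edit_history": edits,       # ordered list of edit actions
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    claim["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return claim

def verify_manifest(image_bytes, manifest):
    """Check the signature and that the manifest matches these image bytes."""
    claim = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(claim, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, manifest["signature"])
            and claim["asset_hash"] == hashlib.sha256(image_bytes).hexdigest())

# Usage: verification fails if the image or its history is altered.
image = b"...raw image bytes..."
manifest = make_manifest(image, "example-genai-model", ["generated", "cropped"])
print(verify_manifest(image, manifest))         # True
print(verify_manifest(image + b"x", manifest))  # False: image was changed
```

This is, in essence, what the “CR” icon surfaces for viewers: a verifiable chain from the image back to the tool that made it and the edits applied since.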

The CR icon is being used in Microsoft Bing’s AI-generated images, by Publicis Groupe, in images generated on OpenAI’s ChatGPT website, and, most recently, as part of the BBC’s move to authenticate images and video on its news site. Meta is now watermarking photos across its platforms, while Google’s DeepMind has launched SynthID, an imperceptible watermark for AI-generated images that can be detected with a dedicated AI tool.

Detractors argue that watermarks can be manipulated. However, they are a rapid response to the challenges of GenAI-related misinformation while still rewarding human creativity. These efforts are likely to evolve as the technology improves. For example, a more robust approach to deepfakes is emerging, with Meta developing tools that detect synthetic media even if its metadata has been altered. 

Balancing Innovation and Authenticity

AI-generated imaging has carved out a new landscape. With stellar visuals at their fingertips, the scope for brands and publishers to engage audiences seems limitless. Yet the very prevalence and deceptive power of these images cast a clearer light on the importance of authenticity and the provenance of content for consumers. This is the bedrock of consumer trust, which is, in turn, the foundation of commercial success.

The industry is seeking a balance between mitigating GenAI’s risks and embracing its innovation. Watermarking and detection tools will evolve alongside the material they’re applied to, and mandatory disclosure of how AI models are trained is also needed, underpinned by international standards.

For now, brands, publishers, and companies that process visual content must sharpen their focus on meeting consumer demand for authenticated content and building trust. Even in an era of innovative, immersive, and engaging content, this will be their mainstay.

How can businesses navigate the landscape of AI-generated imaging while ensuring authenticity and trust with consumers? Let us know on Facebook, X, and LinkedIn. We’d love to hear from you!

Image Source: Shutterstock


Rob Sewell
Rob is CEO of SmartFrame Technologies, an innovative image-streaming service that brings control and fresh monetisation opportunities to image owners, publishers and advertisers. Based in London, Rob joined as CEO in 2015 as a highly experienced entrepreneur and business leader to help define the company’s commercial strategy, take the product to market, and raise the required funding. His previous experience includes multiple start-up roles such as Founder of Rascals Night Club, Founder and CEO at My Phone Club, Co-Founder and Sales Director at Wellbeing Network, and Founder at Holistic Personal Training Services.