Artificial intelligence is flooding social media with low-quality content at an alarming rate, creating significant brand safety challenges even as it offers new tools to combat those very problems. But experts say the technology is currently doing more harm than good.
"As AI takes over and we have all this slop or synthetic content flooding our feeds and influencing the algorithms, trust is going to be impacted," said our analyst Bill Fisher on a recent episode of "Behind the Numbers." "Trust and quality of content could be really, really important in the coming year or two."
The scale of the problem is staggering: Over 1 in 5 videos recommended by YouTube's algorithm are AI-generated "slop," according to a Kapwing analysis of Social Blade data. Here's what marketers need to know.
AI-generated content is creating an environment where consumers must constantly question what's real, fundamentally changing how people interact with social platforms.
"You're having to think, is this real? Which is a bit of an issue," Fisher said. "The problem that creates for brands is that the more this kind of inauthentic content is proliferating on these platforms, how do they go about traversing this and ensuring that they're showing up in the right places?"
The consumer backlash is already measurable, and it could push consumers toward brands they already know rather than leaving them open to new digital experiences.
"I think that mistrust also would just make people want to gravitate towards brands they already trust and content they already trust versus being open to new types of digital experiences in general," said our analyst Jacob Bourne.
The technology's limitations in storytelling present another brand safety concern that may not be immediately obvious.
According to the analysts, consumers perceive AI-generated videos as weaker at emotional storytelling than human-created content. This perception gap became visible when Coca-Cola faced significant backlash for using AI in its Christmas ads for the second consecutive year.
"In testing, it tested quite well. But then as soon as it launched last year before Christmas, before holiday period, huge negative sentiment," said Fisher.
Even when consumers aren't consciously aware they're viewing AI-generated ads, the lack of emotional resonance could create negative unconscious associations with brands over time.
"It could create negative unconscious associations with that brand over time," said Bourne. "And so I think that's also, it's a hindrance that could easily slip under the radar."
AI's ability to generate massive amounts of content at scale introduces operational risks that traditional review processes can't handle.
"You can make a bunch of different versions of an ad that's really personalized. But part of that means that it's harder to vet each of those variations," said Bourne. "Whereas in the past, you'd have your legal review teams review every ad before it went out. But now, with all these thousands, potentially, variations, you can't really do that."
Platform governance has also struggled to keep pace. The European Commission opened an investigation into X and its AI tool Grok earlier this year over circulation of indecent images, analysts said. YouTube recently relaxed rules around running ads next to sensitive content, including dramatized depictions of self-harm and sexual abuse.
Despite creating many of these problems, AI also provides tools to address them, particularly in contextual analysis and content verification.
"AI is really good at producing content at scale. It's really good at contextualizing and recognizing at scale as well," Fisher said. Modern AI can analyze tone, sentiment, and narrative context to help advertisers distinguish safe placements, going far beyond rudimentary keyword blocking.
Tools like Google's SynthID and Microsoft Video Authenticator use watermarking and detection technologies to identify AI-generated content. AI can also vet environments by analyzing social media posts, comments, and reviews to evaluate consumer sentiment and identify brand safety concerns in real-time.
Both analysts estimate AI is currently about 60% hindering and 40% helping brand safety efforts, though they expect this ratio to improve as solutions mature.
"I think over time, hopefully, assuming that the solutions can keep up with the power of AI and the problems that poses, that I think that that should be flipped," Bourne said.
This was originally featured in the EMARKETER Daily newsletter.