AI governance pressure mounts following Grok deepfake fallout

The news: Elon Musk-owned platform X is introducing restrictions on its Grok AI model in response to backlash over the tool generating explicit deepfakes, including sexual images of children.

  • The recent Grok controversy resulted in a UK investigation and broader international pressure to prevent AI tools from being used to create and edit explicit images of real people without consent.
  • X stated that it will add technological safeguards to prevent all users from using the @Grok account to edit photos of people into revealing clothing. The platform is also geoblocking the ability to generate these images in places where doing so violates local law.
  • While UK regulatory authority Ofcom called Grok’s new restrictions a “welcome development,” its investigation into whether Grok violated UK law is proceeding.

Grok’s past: Complaints over Grok’s lackluster content moderation have been mounting. Grok stirred controversy in July when it generated antisemitic outputs, which led to the tool being briefly restricted to image generation only. In the fallout, Turkey banned Grok and EU regulators pushed for stricter regulation of AI chatbots.

The broader issue: Grok’s mishaps have drawn heavy scrutiny, but the latest scandal underscores how easily AI tools let users produce harmful and explicit content.

  • 58% of US adults are “very concerned” about the spread of misleading video and audio deepfakes as a negative consequence of AI, higher than any other concern, per YouGov. Another 27% are “somewhat concerned.”
  • Recent legislative action reflects these consumer concerns. New York enacted a law in December requiring disclosure of AI-generated content in ads, while a companion law requires consent from heirs or executors before a deceased individual’s likeness can be used commercially.

Implications for marketers: AI tools frequently lack the safeguards needed to protect against harm. Ongoing volatility with tools like Grok could make marketers who are interested in piloting AI programs shy away unless they have clear information about a tool’s transparency and safety guardrails.

The reputational risk associated with deepfake content could outweigh Grok’s current appeal to marketers, such as its plans to integrate ads into responses. Marketers must remain vigilant and prioritize platforms whose AI tools come with clear safety and governance measures.
