The news: Elon Musk-owned platform X is introducing restrictions on its Grok AI model in response to backlash over the tool generating explicit deepfakes, including sexual images of children.
- The recent Grok controversy resulted in a UK investigation and broader international pressure to prevent AI tools from being used to create and edit explicit images of real people without consent.
- X stated that it will now include technological safeguards to prevent all users from using the @Grok account to edit photos of real people into revealing clothing. The platform is also geoblocking the ability to generate these images in places where doing so violates local law.
- While UK regulatory authority Ofcom called Grok’s new restrictions a “welcome development,” its investigation into whether Grok violated UK law is proceeding.
Grok’s past: Complaints over Grok’s lackluster content moderation are mounting. Grok stirred controversy in July when it generated antisemitic outputs, which led to the tool being briefly restricted to image generation. The episode also prompted Turkey to ban Grok and EU regulators to push for stricter rules on AI chatbots.
The broader issue: Grok’s mishaps have been highly scrutinized, but the latest scandal highlights the broad problem that AI tools make it easy for users to produce harmful and explicit content.
- 58% of US adults are “very concerned” about the spread of misleading video and audio deepfakes as a negative consequence of AI, higher than any other concern, per YouGov. Another 27% are “somewhat concerned.”
- Recent legislative action reflects consumer concerns. New York enacted a law in December targeting disclosure of AI content in ads, while a companion law requires consent from heirs or executors before a deceased individual’s likeness can be used commercially.
Implications for marketers: AI tools frequently lack the safeguards required to protect against harm. Without clear information about a tool’s transparency and safety guardrails, ongoing volatility with tools like Grok could make marketers who are interested in piloting AI programs shy away.
The reputational risk associated with deepfake content could outweigh Grok’s current appeal to marketers, such as plans to integrate ads into its responses. Marketers must remain vigilant and prioritize platforms whose AI tools come with clear safety and governance measures.