AI deepfakes are sparking a global crackdown after Grok’s explicit content scandal triggered UK probes and stricter safeguards, raising governance stakes for AI tools.
Prominent clinicians and healthcare experts report a growing trend of bad actors using AI to impersonate them online and push unsafe products or unreliable medical information, according to a recent New York Times article. AI deepfakes may further discourage doctors from having their images and voices online. Social platforms must reassure healthcare creators about how they detect AI-driven scammers, enforce impersonation policies, and respond swiftly to deepfake reports.
New York has enacted the first US laws requiring disclosure and consent for AI-generated performers and posthumous likenesses in advertising and entertainment. The measures mandate clear labeling when synthetic or digitally altered performers appear onscreen and require approval from estates before deceased individuals’ likenesses are used commercially. The laws sharpen a state–federal divide: President Trump has warned states against AI rules that could hinder US competitiveness, favoring a single national framework instead. For media companies, New York’s move creates immediate compliance obligations—and a preview of regulatory uncertainty ahead.
Meta withdrew from Media Rating Council (MRC) brand safety audits last week, just months after its accreditation was officially issued, per Adweek. Despite Meta’s other brand safety moves, its step away from the MRC signals that advertisers must now navigate a digital ad landscape where major platforms lack stringent, independently audited brand safety protocols, requiring marketers to strengthen their own brand safety monitoring and verification processes.
Meta announced updates to its brand safety and suitability capabilities for Threads and Instagram this week as it looks to gain advertiser trust amid regulatory scrutiny. The new restrictions are a double-edged sword. On one hand, advertisers can be more confident that their ads will appear next to safe content that doesn’t damage brand image. On the other hand, reaching the younger audiences that help drive growth could become more challenging and require nuance.
OpenAI’s Sora iOS app sparked a wave of creative excitement—and an equally fast wave of scams. Exclusive to iOS and the web, Sora quickly climbed to the top of Apple’s download charts last week. But within days, the App Store was swarming with fake “Sora” and “Sora 2” apps, many hastily rebranded to ride the surge in interest. Opportunists exploit the gap between trademark enforcement, app verification, and public awareness—turning brand equity into bait. Brands must act fast to secure trademarks, domains, and search terms tied to new launches or risk losing trust and revenues to copycats.
The news: Microsoft Advertising now enforces policy compliance at the individual asset level, reviewing ad headlines, descriptions, and images separately. If one element violates policy, the rest of the ad can stay live as long as the minimum number of approved assets remains, per MarTech. Key takeaway: Marketers should embrace modular creative strategies, ensuring each asset complies on its own. Build campaigns with redundancy in approved elements to maintain uptime, and monitor flagged assets so you can respond quickly and preserve ad integrity.
The news: A CBS investigation discovered hundreds of deepfake ads on Meta platforms promoting “nudify” apps that create sexually explicit content from images of real people. The analysis of Meta’s ad library found at least hundreds of such ads across Facebook, Instagram, Threads, Facebook Messenger, and Meta Audience Network. Our take: The spread of deepfake ads on major platforms like Meta underscores AI’s potential to erode consumer trust and heighten brand safety risks, forcing advertisers to navigate a growing gap between innovation and lagging safeguards.
AI fueled election confusion: Social platforms struggled to remove deepfakes and AI-driven misinformation during a contentious election year, but investment in moderation may now dwindle.
Voice assistants fumble the AI revolution: Despite genAI advancements, Big Tech’s assistants face stalled growth and waning interest among older users; Gen Z and parents of Gen Alpha might be their saving grace.
Meta, TikTok, Snap, and X intensify efforts to counter foreign influence and AI-driven deepfakes. Their ability to protect election integrity faces critical scrutiny.
Reports of unauthorized AI personas, including those of deceased individuals, raise concerns about the platform’s oversight and the broader issue of consent in genAI.
The failed breach signals growing cybersecurity threats for US AI firms as geopolitical competition over AI technology intensifies.
With digital twins ready to revolutionize 24/7 sales, RepAI faces the dual challenge of customers’ distrust of AI and broader ethical concerns.
The C2PA (Coalition for Content Provenance and Authenticity) is a group of representatives from companies including OpenAI, Google, Publicis, Microsoft, and Adobe whose goal is to “make the world safe for generative AI,” according to Adam Buhler, steering committee member for the C2PA and executive vice president and head of creative technology at Publicis.
The ad industry’s concern about AI deepfakes grows: Integral Ad Science will measure the negative impact of generated fakes during a year of hefty election spending.
Taylor Swift is the latest victim of AI deepfakes: Faked explicit images of the star went viral online, prompting regulators to condemn the misuse of AI.
Google will require strict disclosures from AI-generated political ads: The company announced the rule change ahead of the most expensive election season yet.
Study reveals Midjourney’s AI can be manipulated to produce racist and conspiratorial images. As 2024 elections approach, AI misuse raises serious concerns.