The news: The Interactive Advertising Bureau (IAB) released its first AI Transparency and Disclosure Framework. It aims to curb deceptive uses of genAI in advertising and protect consumer trust as synthetic voices, images, and digital doubles make ads easier to fake.
- Its core idea is simple: Don't label every AI-assisted asset; label the moments where AI could mislead consumers about what's real.
- The framework recommends clear consumer-facing disclosures for AI use, backed by machine-readable provenance metadata based on C2PA (Coalition for Content Provenance and Authenticity) standards to verify what's synthetic versus what's real (see the sketch below).
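To make the metadata side concrete, here is a minimal sketch of what a C2PA-style AI disclosure can look like. The `c2pa.actions` assertion and the IPTC `digitalSourceType` value shown are part of the public C2PA specification, but the tool and model names below are hypothetical, and a real manifest would be generated and cryptographically signed by C2PA tooling rather than written by hand.

```python
import json

# Illustrative fragment of a C2PA manifest disclosing generative-AI use.
# The "actions" assertion records how the asset was made; the IPTC
# digitalSourceType value signals it was created by a trained AI model.
manifest_fragment = {
    "claim_generator": "ExampleAdTool/1.0",  # hypothetical tool name
    "assertions": [
        {
            "label": "c2pa.actions",
            "data": {
                "actions": [
                    {
                        "action": "c2pa.created",
                        # IPTC code for fully AI-generated media
                        "digitalSourceType": (
                            "http://cv.iptc.org/newscodes/digitalsourcetype/"
                            "trainedAlgorithmicMedia"
                        ),
                        "softwareAgent": "ExampleImageModel",  # hypothetical
                    }
                ]
            },
        }
    ],
}

print(json.dumps(manifest_fragment, indent=2))
```

Verification tools can read this embedded record to distinguish synthetic assets from captured ones, which is what makes the consumer-facing label auditable rather than purely self-reported.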
Why it matters: The urgency for AI disclosure in ads is rising because media buyers are already leaning in.
- 61% of US digital media professionals said they’re excited to advertise within AI-generated content, and 45% said they evaluate those opportunities like any other media buy. Only 2% reject AI adjacency outright, per Integral Ad Science and YouGov.
- Meanwhile, the IAB's own research shows disclosure can reduce fallout: 73% of Gen Zers and millennials say clear AI disclosures would either increase or not change their likelihood of purchasing.
The challenge: The IAB's framework is voluntary, so it only works if agencies choose to adopt it. Without platform mandates, audits, or penalties, AI-use disclosures risk becoming inconsistent: applied by cautious brands but ignored by everyone else.
Even if disclosure doesn’t hurt purchase intent, research from the University of Gothenburg in Sweden finds “Made with AI” labels can reduce emotional engagement and perceived authenticity, adding friction and weakening response.
Implications for advertisers: Adopting AI disclosure now, even while it's voluntary, makes it safer to scale AI creative across channels, because labeled synthetic assets are less likely to become a trust problem later.
The IAB's framework is a critical first step, but voluntary standards don't become the norm until agencies operationalize them. Until then, AI disclosure will be uneven: careful brands will comply while others free-ride on ambiguity.