IAB’s AI disclosure framework is a bid to prevent the next brand-safety crisis

The news: The Interactive Advertising Bureau (IAB) released its first AI Transparency and Disclosure Framework. It aims to curb deceptive uses of genAI in advertising and protect consumer trust as synthetic voices, images, and digital doubles make deceptive ads easier to produce.

  • Its core idea is simple: Don’t label every AI-assisted asset—label the moments where AI could mislead consumers about what’s real. 
  • The framework recommends clear consumer-facing disclosures for AI use, backed by machine-readable metadata using C2PA standards to verify what’s synthetic versus what’s real.
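To make the machine-readable piece concrete, below is a minimal sketch of the kind of provenance record C2PA content credentials can carry. It assumes a manifest using the standard c2pa.actions assertion with an IPTC digitalSourceType value flagging generative output; the tool name and asset title are hypothetical, and this is illustrative of the pattern rather than the IAB's or C2PA's reference tooling.

```python
import json

# Illustrative only: a C2PA-style manifest fragment declaring that an ad
# asset was produced with generative AI. The structure follows the C2PA
# "actions" assertion pattern; claim_generator and title are made up.
manifest = {
    "claim_generator": "ExampleAdTool/1.0",   # hypothetical creative tool
    "title": "summer_campaign_hero.jpg",      # hypothetical asset name
    "assertions": [
        {
            "label": "c2pa.actions",
            "data": {
                "actions": [
                    {
                        "action": "c2pa.created",
                        # IPTC digital source type signaling AI-generated media
                        "digitalSourceType": (
                            "http://cv.iptc.org/newscodes/digitalsourcetype/"
                            "trainedAlgorithmicMedia"
                        ),
                    }
                ]
            },
        }
    ],
}

# In practice a C2PA-capable tool would sign this record and embed it in the
# asset; printing it here just shows the disclosure payload a verifier reads.
print(json.dumps(manifest, indent=2))
```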

Why it matters: The urgency for AI disclosure in ads is rising because media buyers are already leaning in. 

  • 61% of US digital media professionals said they’re excited to advertise within AI-generated content, and 45% said they evaluate those opportunities like any other media buy. Only 2% reject AI adjacency outright, per Integral Ad Science and YouGov.
  • Meanwhile, IAB’s own research shows disclosure can reduce fallout: 73% of Gen Zers and millennials say clear AI disclosures would increase or not change their likelihood to purchase.

The challenge: The IAB’s framework, which is voluntary, only works when agencies choose to adopt it. Without platform mandates, audits, or penalties, AI-use disclosures can become inconsistent—applied by cautious brands but ignored by everyone else. 

Even if disclosure doesn’t hurt purchase intent, research from the University of Gothenburg in Sweden finds “Made with AI” labels can reduce emotional engagement and perceived authenticity, adding friction and weakening response.

Implications for advertisers: Adopting AI disclosure now, even while it remains voluntary, makes it safer to scale AI creative across channels, so synthetic assets don't become a trust problem later.

IAB’s framework is a critical first step, but voluntary standards don’t become the norm until agencies operationalize them. Until then, AI disclosure will be uneven—careful brands will comply, while others free-ride on ambiguity.
