
Spotify found to have several podcasts using AI voices to advertise drugs illegally

The news: A CNN investigation found several fraudulent Spotify podcasts using AI-generated voices to market illegal online drugstores selling prescription medications without medical authorization, breaking US law.

  • Podcasts with titles like “My Adderall Store” and “Xtrapharma.com” appeared readily in search results for related terms. CNN found at least seven promoting illegal drug sales among the top 100 results for “Adderall” and up to 20 within the top 60 for “Xanax.” Some had been online for months.
  • CNN sent a list of 26 fake podcasts marketing illegal drugs while posing as legitimate shows to Spotify. The platform promptly removed the content, with a spokesperson claiming that “[Spotify is] constantly working to detect and remove violating content across our service.”
  • But even after Spotify removed the podcasts last Thursday, other podcasts in the same vein were posted to the platform on Friday morning.

The bigger picture: The podcasts raise questions about content moderation capabilities as AI advances faster than platform oversight mechanisms can evolve. AI makes it easier to mass-produce harmful content across platforms that aren't yet equipped to detect it quickly—and this risk extends beyond the platforms hosting harmful material.

Brands advertising across these platforms face reputational risks when their ads appear alongside or within harmful AI-generated content. An Adalytics report contended that current brand safety tools are insufficient, claiming that ads for major brands often display next to inappropriate content because those tools were brought to market before being fully developed and verified.

Our take: As AI matures, platform accountability will increasingly separate leaders from the rest. Advertisers may begin favoring platforms with clearer transparency, real-time moderation insights, and rapid response mechanisms for AI-related incidents. And as risks rise, brands could pivot from scale-focused programmatic buys to curated environments and premium inventory where content is more tightly controlled.

As AI makes moderation and vetting exponentially harder, advertisers will demand transparency and safeguards from platforms—and ensure they understand moderation processes—before risking brand exposure through ad spend.
