Taylor Swift deepfakes reignite generative AI controversies

The news: Generative AI is facing additional scandals that have sparked lawsuits and drawn regulatory attention.

  • AI-generated explicit images of Taylor Swift went viral on X and other platforms before being removed, prompting outrage from fans and comments from regulators. Several lawmakers made statements about the images, most notably US Senate Intelligence Committee chairman Mark Warner (D-VA), who called them “appalling” and warned of AI’s potential to do greater harm.
  • The estate of late comedian George Carlin has sued media company Dudesy for publishing an AI-generated standup special in Carlin’s likeness titled “I’m Glad I’m Dead.”

Controversies add up: This is not the first scandal created by deepfaked images of notable people or other AI-generated content. Rather, it’s the latest in a long string of incidents that show the technology’s potential for harm and platforms’ unpreparedness to deal with AI-generated content.

  • Other prominent instances include former presidential candidate Ron DeSantis using deepfaked images of Donald Trump together with Anthony Fauci in campaign ads, a viral deepfaked song mimicking the artist Drake, and a lawsuit from Universal Music Group and others against Anthropic for allegedly stealing copyrighted lyrics.
  • As the ubiquity and ease of access to generative AI increase, so too does the use of the technology for harmful and misleading purposes. But despite its proliferation, social media companies are ill-prepared to fight the coming wave of AI-generated content.
  • Social media companies have struggled to stanch the flow of harmful content on their platforms since the outbreak of the Israel-Hamas war in October. They have generally had trouble curbing the spread of damaging AI-generated content even as they launch features that enable broader use of the tech.

Our take: Prominent generative AI scandals won’t do the technology any favors with regulators or with a public whose sentiment toward AI is already low. Until social platforms and AI creators set up stronger protections and limits around AI, use of the tech by advertisers and others could be seen in a negative light.
