The news: The 2024 US election put a spotlight on AI's potential to confuse users with hallucinations and to enable bad actors to spread disinformation.
- Although social platforms such as Facebook, Instagram, TikTok, and Snap dedicated significant resources to managing election-related misinformation, tech layoffs shrank their safety and content moderation teams.
- Oversight paid off for some of the biggest players: Meta reported that AI-generated content accounted for less than 1% of all political, election, and social misinformation on its platforms.
Zooming out: Platform owners' investments in election protection were significant.
- Meta said it had invested more than $20 billion in election safety and security since 2016.
- TikTok said it expected to spend about $2 billion on trust and safety by the end of 2024, including election integrity efforts.
A real threat: Research from Microsoft showed that Russia, China, and Iran accelerated cyber interference attempts shortly before the November US presidential election.
A greater problem came from deepfakes of political figures and from content filters that failed to flag misleading or false information.
- In June, a BBC probe found TikTok’s algorithms were recommending deepfakes and AI-generated videos of global political leaders making inflammatory statements.
- Meta banned political advertisers from using its genAI tools but still allowed political ads on its Facebook and Instagram platforms.
What were the stakes? 46% of adults ages 18 to 29 use social media as their main source for political and election news, per the Pew Research Center. However, only 9% of people older than 16 are confident in their ability to spot a deepfake in their feeds, per Ofcom.
In some cases, false information came from the chatbot itself rather than from users prompting it to create misinformation. In September, xAI's Grok chatbot briefly responded to election-related questions with incorrect information about ballot deadlines.
Our take: Now that the election has come and gone, it's unclear whether social media platforms will maintain such a sharp focus on content moderation.
TikTok is already swapping human moderators for automated systems. If more platform owners cut safety teams to offset AI development costs, AI-generated misinformation could become a constant risk for users.