The trend: Prominent clinicians and healthcare experts report a growing trend of bad actors using AI to impersonate them online and push unsafe products or unreliable medical information, according to a recent New York Times article.
- Dr. Robert Lustig, a well-known endocrinologist, was targeted by a scammer who used AI to make it appear that he endorsed an unapproved liquid weight loss medication.
- Dr. Gemma Newman, a UK family physician and author, warned her Instagram followers that an AI-altered TikTok video falsely showed her pitching vitamin B12 capsules containing 9,000 milligrams of beetroot to women.
- Dr. Christopher Gardner, a renowned nutrition scientist, had his voice impersonated by AI in YouTube videos spreading faulty medical advice, aimed particularly at older consumers.
Why it matters: Deepfake “doctors” generate millions of views on social media, according to a recent warning from the American Medical Association. And campaigns that use doctors’ likenesses and voices without their consent sometimes remain up even after being identified, per the NYT. Meanwhile, unproven health products featuring AI-generated fake clinician endorsements have appeared for sale on Amazon and Walmart—and even as sponsored ads in Google search results.
The fallout: Vulnerable consumers risk purchasing potentially dangerous supplements, while doctors question whether creating health content on social media is worth the hassle. For instance, Gardner told the NYT that he now wonders whether all the online educational videos he’s created about nutrition merely give scammers more material to impersonate him for their own benefit.
Implications for social media companies: Social platforms want physician influencers to post content that draws the rapidly growing number of consumers who turn to social media for health information.
But many doctors are already hesitant to put themselves out there on social media—the AI risks only heighten their uncertainty.
- About 45% of doctors who don’t post on social media say the risks outweigh the rewards, per a recent Inlightened survey. The threat of AI impersonation may further discourage them from putting their images and voices online.
- Social platforms must reassure healthcare creators about how they detect AI-driven scammers, enforce impersonation policies, and respond swiftly to deepfake reports.
Implications for pharma marketers: Pharma marketers must conduct their own social monitoring, since they likely know better than the platforms which health supplements and medical claims are intentionally deceptive. Doing so will help earn the trust of concerned doctors (some of whom could be potential prescribers or content partners) and of consumers who need help spotting impersonation red flags.
This content is part of EMARKETER’s subscription Briefings, where we pair daily updates with data and analysis from forecasts and research reports. Our Briefings prepare you to start your day informed, to provide critical insights in an important meeting, and to understand the context of what’s happening in your industry. Not a subscriber? Click here to get a demo of our full platform and coverage.