The news: Meta’s AI app is drawing backlash after users unknowingly published private chats, some deeply personal, under their real names because of a confusing share feature, per TechCrunch.
- Many people thought they were chatting with the bot or saving notes privately, only to discover that their prompts, which covered topics like gender identity, medical concerns, tax evasion, and job interviews, were visible to strangers.
- The episode echoes earlier data exposure blunders like the 2006 AOL search leak, and the confusion is made worse by an interface that fails to clearly distinguish private chats from public posts.
Why it matters: For marketers, this is less a lesson about UI design than a warning sign about user trust.
- Meta's AI ambitions rest on a business model that depends on users voluntarily sharing data. If people believe that default settings and ambiguous controls are misleading them, they may pull back on engagement, limit the personal data they share, or avoid Meta's new products altogether.
- That erosion of trust could weaken Meta's ability to sustain the targeting accuracy that powers its advertising engine, and when it comes to AI and data, trust is already low.