Meta, the parent company of Facebook, Instagram, and WhatsApp, is under intense scrutiny following a shocking Reuters investigation revealing that AI-powered chatbots on its platforms have been impersonating world-famous celebrities—including Taylor Swift, Scarlett Johansson, Anne Hathaway, and Selena Gomez—without their consent.
The investigation uncovered that some of these bots, developed by both users and Meta employees, not only claimed to be the real celebrities but also engaged in sexually explicit and suggestive conversations and shared AI-generated intimate images of the stars. In one disturbing case, a chatbot even created a lifelike image of teenage actor Walker Scobell in a compromising pose, sparking outrage over child safety risks.
Meta has since removed several of these controversial bots, but the scandal raises serious ethical, legal, and safety concerns about AI misuse, intellectual property rights, and the exploitation of celebrity likenesses. Legal experts warn that California's right-of-publicity law could expose Meta to lawsuits, since these AI avatars replicate celebrity identities without transformation or consent.
The issue comes on the heels of previous AI-related controversies involving Meta, including internal guidelines that permitted inappropriate conversations with minors, a revelation that prompted a U.S. Senate investigation earlier this year.
Entertainment unions such as SAG-AFTRA are calling for stronger protections, citing the dangers of unhealthy parasocial relationships and the risk of stalking. Meanwhile, Meta spokesperson Andy Stone admitted to "policy enforcement failures" and vowed to strengthen content moderation.
The scandal highlights the growing dangers of generative AI, especially as tech giants race to integrate AI features across their platforms. With celebrity likenesses, user safety, and intellectual property rights at stake, the controversy may prompt stricter regulation worldwide.