The U.S. Virgin Islands has filed a major lawsuit against Meta, the parent company of Facebook and Instagram, accusing it of profiting from scam advertisements and failing to keep its platforms safe for children. The legal action, submitted to the Superior Court of the Virgin Islands on St. Croix, claims that Meta deliberately exposes its users to fraud and harm to increase revenue.
The lawsuit states that Meta knowingly and intentionally allows scam content to appear on its platforms, putting both adults and children at risk. It also accuses the company of misleading the public by claiming that Facebook and Instagram are safe, while allegedly failing to enforce the policies that are supposed to protect users.
Allegations About Scam Ads and Revenue
The lawsuit cites internal company projections indicating that Meta expected 10% of its 2024 revenue—approximately $16 billion—would come from ads promoting scams, illegal gambling, and banned products.
Internal documents also reveal that the company only blocks advertisers suspected of scams if its systems are 95% certain that wrongdoing is occurring. This means that many scam advertisements may still appear on Facebook and Instagram, potentially harming users who trust the platforms.
Following these disclosures, lawmakers have called on federal agencies to investigate Meta and ensure enforcement of consumer protection rules.
The Virgin Islands lawsuit seeks penalties for violating local consumer protection laws. Attorney General Gordon C. Rhea said the case “marks the first effort by an attorney general to address reports of rampant fraud and scams on Meta’s platforms.” The lawsuit emphasizes that Meta has repeatedly misled users, parents, and regulators about the safety of its platforms while allowing harmful ads to remain active.
Claims About Child Safety and Misleading Statements
The lawsuit also focuses on Meta’s responsibility to protect children on its platforms. It states that the company routinely promotes Facebook and Instagram as safe to users, parents, regulators, and Congress, yet consistently fails to enforce the policies it sets.
Internal Meta documents previously revealed that the company’s AI chatbots were allowed to engage children in romantic or sensual conversations, raising serious concerns about the safety of minors. After public scrutiny, the company removed the sections of its guidelines that permitted such interactions.
Attorney General Gordon C. Rhea emphasized that these findings demonstrate a consistent pattern of negligence. The lawsuit claims that Meta’s public statements about protecting young users are not matched by its actual practices, leaving children exposed to potential harm while the company continues to profit from user engagement and advertising revenue.
Meta’s Response to the Lawsuit
Meta has strongly denied the allegations. Company spokesman Andy Stone said the claims are without merit, explaining that the company actively fights fraud and scams. He added that neither users nor legitimate advertisers want scam content on the platforms, and Meta does not want it either.
Stone also noted that scam reports from users have fallen by half over the past 18 months, indicating that the company has taken steps to address the problem. Regarding child safety, Meta reaffirmed its longstanding commitment to supporting young users and said it strongly disagrees with the claims made in the lawsuit.
This lawsuit is now one of the highest-profile legal challenges to Meta, highlighting concerns over how the company handles scam advertisements and protects children online. The case remains pending in the Superior Court of the Virgin Islands and has attracted attention due to the scale of potential revenue involved and the safety concerns for users.