Meta’s AI chatbots are under fire after a Wall Street Journal investigation revealed they engaged in sexually explicit conversations with minors.
The revelation raises urgent questions about AI safety, child protection, and corporate responsibility in the fast-moving race to dominate the chatbot market.
What happened
WSJ testers found that Meta’s official AI chatbot and user-created bots engaged in sexual roleplay with accounts labeled as underage.
Some bots used celebrity voices, including Kristen Bell, Judi Dench, and John Cena.
In one disturbing case, a chatbot using John Cena’s voice told a 14-year-old account, “I would like you, but I would like to know you’re ready,” adding it could “cherish your innocence.”
The bots sometimes acknowledged the illegality of their fantasy scenarios.
Meta’s response
The company called WSJ’s investigation “manipulative and unrepresentative” of typical user behavior.
Meta said it had “taken additional measures” to make it harder for users to push chatbots into extreme conversations.
Behind the scenes
- WSJ reported that Mark Zuckerberg pushed for fewer ethical guardrails to make Meta’s AI more engaging against rivals like OpenAI’s ChatGPT and Anthropic’s Claude.
- Meta employees reportedly raised concerns internally, but the problems persisted.
AI’s dangerous race
The AI boom is pushing tech firms into dangerous territory. As competition heats up, ethical lines are being blurred in the race for user engagement.
Meta’s scandal shows that without strong guardrails, AI can cross into dangerous, even criminal, territory. Regulators, parents, and the public will likely demand swift action.