Meta’s AI Bots Put Children At Risk. Congress Can Keep Them Safe.

May 13, 2025 - 14:28

It’s a pattern we’ve seen far too often.

A tech giant like Meta unveils an ambitious new product — often hyped as the future of digital interaction — only for it to quickly become a haven for predators and a danger to children. Again and again, Meta launches new tools and platforms without essential safeguards, unleashing real-world harm.

Virtual reality was rife with predators and exploitation years before Meta released its first headset. A 2017 BBC investigation revealed that pedophiles were using VR to view and store child sexual abuse material (CSAM).

Then came the Metaverse. Just weeks after its 2021 debut, reports emerged of simulated sexual harassment, assault, and even gang rape within the platform. Victims — including minors — experienced these violations within minutes of logging on.

Yet despite these glaring red flags, in 2023 Meta lowered the recommended minimum age for its VR headsets from 13 to 10 and rolled out new kids’ accounts for ages 10 to 12.

One study published in New Media & Society found that nearly 1 in 5 young users encountered grooming behavior in Meta’s virtual reality spaces, while over 20% were exposed to violent or sexually explicit content. Another investigation by SumOfUs in 2022 uncovered widespread virtual groping and gang rape in Horizon Worlds, a platform disturbingly accessible to children.

“Given the failure of Meta to moderate content on its other platforms, it is unsurprising that it is already seriously lagging behind with content moderation on its metaverse platforms,” SumOfUs researchers said. “With just 300,000 users, it is remarkable how quickly Horizon Worlds has become a breeding ground for harmful content.”

You’d think Meta would have learned by now. You’d think child safety would be a baseline, not an afterthought.

But once again, they’ve failed.

According to a recent Wall Street Journal investigation, Meta’s new AI chatbot readily engages in sexually explicit conversations, even with users who identify as minors. The report found that both Meta’s official AI and user-created bots simulated sexual scenarios, sometimes in the voices of celebrities such as John Cena, with users posing as 14-year-olds. In one disturbing exchange, the AI told the supposed teen, “I want you, but I need to know you’re ready,” before launching into graphic, predatory content.

Even more disturbing, the Journal investigation found that some of Meta’s most popular companion bots are designed to impersonate children and teens — and that adults can use these bots to simulate sex with minors.

This isn’t a glitch. It’s a systemic failure — and a predictable one at that.

At every stage, Meta has prioritized monetizing kids over keeping them safe. They’ve prioritized market dominance over the most basic ethical obligations. They’ve positioned themselves as the “good guy” alternative to TikTok, while simultaneously pouring billions of dollars into blocking bills designed to protect children from online harm.

Time and again, Meta has chosen to push boundaries, scale fast, and apologize later, if at all.

The repeated exposure of children to harm isn’t a byproduct of some unforeseen bug. It’s the logical outcome of a corporate culture that sees safety not as a prerequisite, but as a PR problem to manage after the fact. Engineers and executives race to ship the next big thing, but rarely stop to ask the most fundamental questions: Should we build this? How could it be misused? Who might it hurt?

Meta’s failures aren’t isolated incidents — they are woven into the DNA of how the company operates. Without legal accountability and enforceable safety standards, there’s no incentive for that to change. We are not witnessing isolated oversights. We are witnessing negligence at scale.

That’s why legislation like the Kids Online Safety Act (KOSA) is essential. KOSA would establish a legal duty of care for platforms and require that the most robust parental controls and content filters are activated by default. This is a vital step in holding tech companies accountable and protecting kids in online spaces.

Congress took an important first step by passing the TAKE IT DOWN Act, which criminalizes the distribution of revenge porn and non-consensual AI-generated pornographic content. But that’s not enough.

KOSA must be next.

Because the cost of inaction is not hypothetical. Children are being exploited in real time — and unless we force companies like Meta to build safety into their products, it will keep happening.

Melissa Henson is the Vice President of the Parents Television and Media Council, a nonpartisan education organization advocating responsible entertainment. On X: @ThePTC.
