Keeping States From Governing AI Helps Big Tech and Harms Kids

Jun 28, 2025 - 07:28

Today’s push for a 10-year artificial intelligence moratorium has been framed in legislation as a “temporary pause” on state AI and algorithmic regulation. Proponents claim this is necessary to prevent “an unworkable patchwork of disparate and conflicting state AI laws” and preserve America’s lead over China in AI.

But this ignores a more immediate and dangerous reality: The moratorium would unnecessarily paralyze the most promising and urgently needed online reforms in America today—those designed to protect children online.

Over the past two years, states have led the nation in passing innovative laws to combat various exploitative uses of AI like deepfake child sexual abuse imagery or manipulative recommendation algorithms. Thirty-eight states have updated their child sexual abuse material laws to include AI-generated content.

If the moratorium becomes law, though, the remaining 12 states would be blocked from enacting similar protections if they accept new or reobligated Broadband Equity, Access, and Deployment program funding—a $42.45 billion program for which all 50 states have submitted funding proposals.

Perhaps even more concerning, all 50 states could be barred from enforcing existing laws under the same conditions.

This moratorium is not a symbolic gesture of unity—it carries real, enforceable consequences. It threatens to claw back the entirety of a state’s BEAD funds, including already-obligated dollars, unless the state agrees to halt both the enactment and enforcement of any law regulating AI or “automated decision systems.”

That latter term is defined so broadly it sweeps in virtually all modern algorithms, which form the backbone of nearly every online platform and digital interaction. As a result, states could be blocked from implementing laws on age verification, content moderation, algorithmic recommendations, and “nudifying” and voice-cloning apps—precisely the tools being used to target and harm children online.

Many of the most innovative child safety laws in the U.S. today began in the states.

Arizona’s HB 2175 ensures medical insurance claims aren’t completely outsourced to algorithmic tools by requiring insurance medical directors to review claim denials. Tennessee’s ELVIS Act protects artists—including minors on platforms like Instagram—by banning AI-generated replicas of their voices without their consent.

New York’s SAFE for Kids Act requires platforms to obtain parental consent before subjecting minors to addictive algorithmic feeds or overnight access. Florida recently passed a law barring under-14s from social media altogether—a response to mounting data on how platform design harms mental health.

Supporters of the moratorium argue that technology-specific laws aren't needed to govern technology. These laws are but a few examples of how kids would pay the price for that wishful thinking.

States have also taken the lead on age verification. Twenty-four states have passed laws requiring adult websites to verify the age of users. Utah and Texas have enacted laws that apply this logic to app stores, requiring age-appropriate design and parental consent.

These policies are narrowly tailored, data-minimizing, and often bipartisan—precisely the kind of innovation Congress claims to want but has so far failed to deliver. Outside of Sen. Ted Cruz’s bipartisan TAKE IT DOWN Act, which addresses nonconsensual deepfake images, there is no comprehensive federal AI child safety law. Yet under the moratorium, states would be barred from filling this gap.

Supporters of the moratorium claim it prevents states like California from setting national rules for AI. But in practice, California—with a $325 billion budget—can afford to forgo BEAD funds and continue regulating AI. It's smaller, often red, states—say, one with a $4 billion budget—that can't.

The result is a perverse dynamic where progressive states are free to impose EU-style rules, while conservative states are blocked from passing more balanced, locally grounded protections.

This isn’t theoretical. The states being handcuffed by the moratorium are the same ones that banned TikTok on government devices long before Congress acted. They are the states creating the political and policy momentum necessary for federal change. Removing that power means removing the engine of American AI accountability—and sacrificing children’s safety in the process.

The AI moratorium doesn’t stop China. It stops states like Texas, Tennessee, and Utah. It doesn’t defend innovation. It defends Silicon Valley from scrutiny. And it doesn’t protect kids. It protects Big Tech.

The post Keeping States From Governing AI Helps Big Tech and Harms Kids appeared first on The Daily Signal.
