Yes, there's an AI hive mind, and it's making us dumber

Mar 14, 2026 - 12:28


A new paper finds that LLMs bend toward imitation, non-creation, and, despite requests for fresh takes, put out derivative conclusions.

The paper has some AI observers surprised and others scrambling for explanations. Simply put, models trained on finite datasets originate nothing of their own. Worse, all the models, whatever their corporate or architectural differences, wound up spewing almost the same results. The differences in input, apparently, made little difference in output.

“This research reveals a critical limitation in large language models,” said Yulia Tsvetkov, a lead researcher and author of the study. “Despite their diversity of architectures and training approaches, LLMs produce strikingly homogeneous outputs on open-ended queries, a phenomenon we termed the ‘artificial hivemind.’”

“Hive mind,” believe it or not, is generous. LLMs cannot sync in the telepathic sense we attribute to honeybees or ants. All they are capable of is recursion, rehashing their inputs. There is no reflection beyond what has been trained into the models. No wonder they all sound the same.

The researchers, working at institutions including the Allen Institute for Artificial Intelligence, the University of Washington, Carnegie Mellon, and Stanford University, evaluated approximately 70 different LLMs on a dataset they dubbed “INFINITY-CHAT.”

The researchers posed 26,000 open-ended questions to the LLMs, breaking out “the different queries that users pose to language models into six high-level categories and 17 fine-grained subcategories such as problem solving or speculative and hypothetical scenarios,” according to their report. “Of the high-level categories, creative content generation (58%) and brainstorming and ideation (15.2%) were among some of the most common — emphasizing users’ reliance on LLMs for direct inspiration and thought.”
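The report quoted here does not spell out how homogeneity was scored, but the core idea can be illustrated with a toy measurement: collect each model's answer to the same open-ended prompt and average the similarity over every pair of answers. The sketch below is a hypothetical illustration, not the paper's actual method; it uses a simple bag-of-words cosine similarity, and the sample responses are invented.

```python
from collections import Counter
from itertools import combinations
import math

def cosine_similarity(a: str, b: str) -> float:
    """Bag-of-words cosine similarity between two texts (0 = disjoint, 1 = identical word counts)."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    norm = math.sqrt(sum(c * c for c in va.values())) * math.sqrt(sum(c * c for c in vb.values()))
    return dot / norm if norm else 0.0

def mean_pairwise_similarity(responses: list[str]) -> float:
    """Average similarity across every pair of model responses to one prompt.

    A value near 1.0 would suggest the "hivemind" effect: different models
    converging on nearly the same answer."""
    pairs = list(combinations(responses, 2))
    return sum(cosine_similarity(a, b) for a, b in pairs) / len(pairs)

# Invented responses from three hypothetical models to the same open-ended prompt.
responses = [
    "It's not just a sunset, it's a reminder that endings can be beautiful.",
    "It's not merely a sunset, it's a reminder that every ending is beautiful.",
    "A sunset is not an ending, it's a beautiful reminder of renewal.",
]
print(f"mean pairwise similarity: {mean_pairwise_similarity(responses):.2f}")
```

In practice, researchers would use stronger measures than word overlap (embedding similarity, for instance), but the shape of the finding is the same: the closer the average pairwise score sits to 1.0 across thousands of prompts, the more the models behave like a single mind.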

There’s another disturbing angle we might consider.

The limitations of the LLMs are baked into the facts of silicon and spirit. Their limitations are unalterable, and they will never achieve “consciousness,” merely simulating it at most. We shouldn’t expect much in terms of pure creativity. But what about the nutritive and psychic value of the material upon which the models were trained? Is part of the problem highlighted in the “Hivemind” study rooted in that human-made material itself?

RELATED: Shock report reveals just how much Gen Zers and Millennials dislike AI

A particular post on X flagged this study. It is no exaggeration, and no disparagement of the poster, who is surely just following the incentives of our financialized social media, to say that the post itself reads like LLM-speak. It uses the now-typical “it’s not A, it’s B” turn of phrase so often repeated by AI and by the humans who interact with it.

This effect of humans sinking into the lexical and semantic patterns of LLMs was highlighted in another recent study, “Homogenizing effect of large language models on creative diversity.” “While LLMs can produce creative content that might be as good as or even better than human-created content,” the report concluded, “their widespread use risks reducing creative diversity across groups of people.”

Viral catchphrases and shopworn cliches come and go. Not too long ago, you couldn’t turn on the radio or crack open a news site without encountering the phrase “it turns out that,” shortly followed by “is a dumpster fire.” We have a dangerous, but also useful, in-built tendency toward imitation. But we have, while LLMs do not, a number of tethers back to reality, back to the visceral and the spiritual.

How much of everything we’ve been reading over the last few decades has already been vastly watered down or filtered through, first, the criteria of market competition; second, government coercion and outright censorship; and lastly, through the highly dramatic corporate homogenizing process referred to as consolidation?

The alarm surrounding this latest “Hivemind” study will die down. Perhaps the models will be rejiggered to allow for output more convincing to human observers. But the more critical question, concerning how our own deteriorating capacities for discernment may have contributed to the ways these machines were modeled, will remain uncomfortable. We should try to unravel the mysteries of our own recent degeneration by looking at ourselves first.
