Ivy League techies invent AI scam callers — but don't worry, it's only for 'research'

Mar 9, 2026 - 15:44


Cornell University says chatbots can be grossly misused, and its researchers are proving it.


The school announced recently that it had created a large language model that demonstrated fluency and reasoning capabilities advanced enough to make scam phone calls.

'ScamAgent constructs persistent personas, ... and uses deception strategies that unfold over time.'

ScamAgent, Cornell wrote, is an autonomous AI agent that can generate realistic scam-call scripts simulating real-world scenarios in which a call recipient is targeted by fraud.

Simply put, it works like a chatbot whose goal is to deceive and persuade the call recipient.

Scam scripts were transformed into "lifelike voice calls using modern text-to-speech systems, completing a fully automated scam pipeline," Cornell wrote.

At the same time, the research explained that the chatbot showed a remarkable ability to circumvent the safety guardrails built into the language model, sidestepping the prompts and content filters meant to block harmful output.

RELATED: Mamdani allies push to ban chatbots from answering questions about law, medicine, and psychology

"ScamAgent constructs persistent personas, maintains conversational context, and uses deception strategies that unfold over time. This design allows it to bypass existing safety guardrails by decomposing harmful tasks into benign subgoals and leveraging contextual carryover to avoid triggering filters."

The agent was tested in a series of real-world fraud scenarios that Americans have become all too familiar with, such as medical insurance verification scams, impersonation scams, prize or lottery fraud, and government benefit enrollment scams. However, researchers used a different chatbot as the recipient, not real people.

Researchers also noted that converting the scripts into audio and assembling a fully automated scam call required little technical expertise.

RELATED: This new laser farming technique could free us from pesticides — forever

Photographer: Kuni Takahashi/Bloomberg via Getty Images

As for why anyone would build such a deceptive AI agent in the first place, Cornell researchers said they wanted to highlight an urgent need to detect and disrupt conversational deception powered by AI agents.

They added that even "state-of-the-art" AI models are vulnerable to being used for deception, while also calling for "proactive safeguards" and "regulatory oversight."

