‘Threat Multiplier’: Experts Warn of Downside of Artificial Intelligence

Jul 16, 2025 - 14:28

At a House panel’s hearing Wednesday, policy experts delivered a grim assessment of the growing criminal use of artificial intelligence.

Rep. Andy Biggs, R-Ariz., chairman of the House Judiciary Subcommittee on Crime and Federal Government Surveillance, opened the hearing by noting that “AI-enabled threats continue to evolve as bad actors use AI technology in a wide spectrum of criminal enterprises.”

The hearing comes after congressional Republicans decided against including a 10-year moratorium on individual state regulation of AI in the One Big Beautiful Bill Act. In the absence of federal regulation, states have already begun regulating the emerging technology through legislation. Texas, in particular, has restricted the use of AI to censor viewpoints on some websites and has guaranteed residents the right to opt out of AI-driven personal-data harvesting.

Andrew Bowne, a professorial lecturer in AI law and policy at George Washington University Law School, described how the emerging technologies associated with the term artificial intelligence allow criminals to commit crimes at a much larger scale.

“It accelerates traditional processes but also creates entirely new ones. When the task AI is used for is criminal or harmful, the nature of AI becomes a threat multiplier,” Bowne explained. 

Bowne cited computer vision systems, generative adversarial networks (GANs), and large language models (LLMs) as three major technological advances that criminals are now using to exploit citizens.

Bowne noted that computer vision systems “are used to automate surveillance, identify targets, and even harvest personal data from breached documents to support identity theft and fraud.”

“What enables real-time threat detection for public safety can be repurposed to stalk or blackmail individuals with chilling efficiency,” Bowne warned. 

The law school lecturer then detailed how criminals can harness audio and video posted online and weaponize them to extort individuals.

“GANs are capable of generating synthetic images, videos, and audio, better known in the public discourse as ‘deepfakes.’ These tools allow the impersonation of public officials and private citizens alike,” Bowne said, adding:

They engage victims in extended, realistic conversations, target elderly and vulnerable people in scams, or overwhelm financial institutions with thousands of tailored loan applications.

They’re also used to generate malicious code, making cybercrime accessible to individuals with no technical background.

Ari Redbord, the global head of policy at TRM Labs, a software company that provides security services, and a former assistant U.S. attorney, explained that the hearing was important because the entire criminal ecosystem is shifting.

“When the marginal cost of launching a scam, phishing campaign, or extortion attempt approaches zero, the volume of attacks and their complexity will increase exponentially,” Redbord cautioned.

“We’re not just seeing more of the same. We’re seeing new types of threats that weren’t possible before AI. Novel fraud typologies, hyper-personalized scams, deepfake extortion, autonomous laundering,” the cybersecurity expert said. 

Still, Redbord warned against potential bans on artificial intelligence.

“The solution to the criminal abuse of AI is not to ban or stifle the technology. It is to use it and use it wisely. We must stay a step ahead of illicit actors by leveraging the same innovations they use for bad, for good,” he said.

