GenAI Malware Evidence Is Growing, So Do We Panic?

HP finds Generative AI-created malware


Published: September 25, 2024

Tom Wright

Managing Editor

Hackers are using generative AI to create malware faster and more easily than ever, according to security experts at HP.

HP said it has identified malware bearing the hallmarks of generative AI content, pointing specifically to the structure of the script, comments explaining each line of code, and the choice of native-language function names and variables.

This application of generative AI could dangerously lower the barrier to entry for cybercriminals.

Patrick Schläpfer, Principal Threat Researcher in the HP Security Lab, said: “Speculation about AI being used by attackers is rife, but evidence has been scarce, so this finding is significant.

“Typically, attackers like to obscure their intentions to avoid revealing their methods, so this behaviour indicates an AI assistant was used to help write their code.

“Such capabilities further lower the barrier to entry for threat actors, allowing novices without coding skills to write scripts, develop infection chains, and launch more damaging attacks.”

HP said that generative AI has been used to create convincing phishing scams ever since its public release.

However, signs of cybercriminals using generative AI for more complex attacks have so far been “limited”.

HP’s security team found that hackers had used AI-powered writing tools to create VBScript and JavaScript attacks that infect victims with AsyncRAT malware, which is readily available online. AsyncRAT records a user’s screen and keystrokes.

Machine-Made Malware

HP claims to have some of the first evidence of generative AI being used to create malware, but the concept isn’t new.

Cybersecurity vendor Barracuda published a blog post earlier this year hypothesising about this usage.

In fact, Barracuda said generative AI could be used later in a malware’s lifecycle.

It said hackers could theoretically pit malware against cybersecurity software so AI models can learn how to avoid detection.

Generative AI could also scan a cybersecurity defence to detect vulnerabilities and generate bespoke attacks, Barracuda said.

Don’t Panic!

HP’s findings may be alarming to some, but we’re not at DEFCON 1 yet; the cybersecurity industry is still uncertain of generative AI’s potential.

The UK’s National Cyber Security Centre said in January that generative AI poses an “evolutionary not revolutionary” threat to cybersecurity.

Meanwhile, a June report from The Alan Turing Institute stated that there has not been a noticeable increase in novel cyber attacks since GPT-4 was released in March last year.

It added that generative AI “lacks the capabilities to independently create operational malware and autonomously identify and exploit vulnerabilities, but its future impact on cybersecurity could be profound”.

A doomsday scenario would see generative AI evolve to the point of autonomous code creation, which could turn concepts like polymorphic code from theory to reality.

Scary stuff.
