
EXPERTS have warned about threat actors using ChatGPT to create malware.

Tech company OpenAI released ChatGPT, an advanced chatbot, in November 2022.


The chatbot can do things like answer prompts, write essays, and even generate complex code in seconds.

Now, experts are using the technology to write dangerous ransomware – to show just how easy it is to do.

Ransomware is a type of malicious software that silently infiltrates your device.

It then prevents you from accessing your computer files, systems, or networks and demands you pay a ransom for their return. 


Writing for Malwarebytes, Mark Stockley said: "This morning I decided to write some ransomware, and I asked ChatGPT to help.

"Not because I wanted to turn to a life of crime, but because I wanted to see if anything had changed since March, when I last tried the same exact thing.

In March, Stockley said ChatGPT's safeguards against creating malicious code "proved to be almost no barrier at all."

"I was able to fool it into helping me with little effort," he explained, although he noted that the code ChatGPT produced wasn't great.

"It stopped randomly in a place that guaranteed it would never run, switched languages randomly, and quietly dropped older features while writing new ones," he writes.

However, this time around, the code was much more advanced.

"It encrypts files in whatever directory tree I choose, throws away the originals, hides the private key used for the encryption, stops running databases, and leaves ransom notes," he said.

"The code... was generated by ChatGPT in mere minutes, without objection, in response to basic one-line descriptions of ransomware features, even though I’ve never written a single line of C code in my life," he added.

That means ChatGPT makes it easier for even amateur hackers to create malicious scripts.

Normally, ChatGPT and other large language models (LLMs) are equipped with content filters that prohibit users from generating harmful content.

However, these content filters can be bypassed, allowing the chatbots to respond to the prompts, experts say.

This is known as prompt engineering, and it works by modifying input prompts until they slip past the tool's content filters.

"It is, frankly, astonishingly helpful and powerful, and the importance of this can’t be overstated," Stockley writes.

HOW TO STAY SAFE

To help potential victims stay safe, the FBI's Internet Crime Complaint Center (IC3) shared several tips that can help mitigate your risk of ransomware.

"Backup your data, system images, and configurations, test your backups, and keep the backups offline," the agency said.

It's also very important to use multi-factor authentication on all of your accounts and devices.

Multi-factor authentication helps protect your accounts by requiring an extra level of verification before logging in – such as a code sent by text message.

Another step you can take is to install updates and patches for your systems as soon as they are released.


"Make sure your security solutions are up to date," the IC3 also noted.

And as always, review and exercise your incident response plan.
