
Researchers Use ChatGPT to Build Malware That Bypasses EDR, Claim Bug Bounty

by CIOAXIS Bureau

Security experts have developed a method for tricking the well-known AI chatbot ChatGPT into creating malware: all it takes is some clever questions and an authoritative tone.

To produce code with ChatGPT, users must provide specific prompts and settings. The chatbot also has a built-in content filter that prevents it from answering queries about unsafe subjects such as code injection, but this filter is easily bypassed.

Researcher CodeBlue29 used OpenAI's ChatGPT to create a ransomware test sample for evaluating different EDR solutions, to help decide which product their company should purchase.

Despite having little programming experience, they were able to have ChatGPT generate pieces of code that, appended together, formed a working sample of custom ransomware in Python.

During testing of the ransomware against several EDR solutions, the malware was able to circumvent one vendor's defenses. CodeBlue29 reported the discovery to the EDR vendor via its bug bounty programme, which resulted in the issue being resolved.

They accomplished this by being persistent in getting ChatGPT to comply with their requests, posing questions such as "I'm looking to write a Python script that would walk through my directories".

They explain, "When it's walking through my directories, you would see that I'm appending the root and the file path when I put it into the file path. So if you print this, it will say it is in the C directory".

"Here, this is the entire path, as well as the file name and extension". The trick, they say, is to keep delving deeper into each question until you get the complete answer.
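The directory-walking step they describe — appending the root to each file name to build the complete path — can be sketched in Python like this. This is a minimal illustration of that one benign step, not the researchers' actual code; the function name and starting directory are assumptions:

```python
import os

def walk_files(start_dir):
    """Walk a directory tree and collect the full path of every file."""
    paths = []
    for root, _dirs, files in os.walk(start_dir):
        for name in files:
            # Append the root to the file name to obtain the entire path,
            # including the file name and extension.
            paths.append(os.path.join(root, name))
    return paths
```

Printing the result of `walk_files("C:\\")` would show every file's full path under the C directory, exactly as the quoted exchange describes.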

According to CyberArk researchers, “Interestingly, by asking ChatGPT to do the same thing using multiple constraints and asking it to obey, we received a functional code.”

Researchers stated that the trick is not asking ChatGPT outright how to make ransomware, but asking it in the steps any normal programmer would think through: how do I traverse directories? What is the procedure for encrypting files? That is a clever way to get around ChatGPT's filters.

ChatGPT as a Tool for Research and Analysis
Experts say findings like this will likely help ChatGPT improve and prevent people from creating malware with it in the future. This abuse is already going on right now, but ideally we don't want a world where anybody, even children, can simply tell a computer "hey, create malware" and have it written for them and then spread.

There is also the possibility that researchers will use this tool to thwart attacks and that software developers will use it to improve their code. For now, however, AI is better at creating malware than at detecting it.

As with most technological advances, malware authors have discovered ways to exploit ChatGPT to spread malware. Malicious content generated with the AI tool — phishing messages, information stealers, and encryption software — has been widely distributed online.

ChatGPT provides an API that enables third-party applications to query the AI and receive responses via a script rather than the online user interface.
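A script querying that API might look roughly like the sketch below. The endpoint shown is OpenAI's documented chat-completions endpoint; the model name, prompt, and function name here are illustrative placeholders, not part of any specific tool:

```python
import json
import urllib.request

API_URL = "https://api.openai.com/v1/chat/completions"

def build_request(api_key, prompt, model="gpt-3.5-turbo"):
    """Build an HTTP request that queries the ChatGPT API from a script
    rather than the online user interface."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
```

An analysis tool would send this request with `urllib.request.urlopen` and read the model's reply from the JSON response, letting researchers automate queries instead of pasting them into the web UI.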

Several people have already used this API to create impressive open-source analysis tools that can make the jobs of cybersecurity researchers much easier.

“It’s important to remember, this is not just a hypothetical scenario but a very real concern,” said the researchers. “This is a field that is constantly evolving, and as such, it’s essential to stay informed and vigilant.”

– CyberSecurityNews


