AI ransomware is a real threat, just not a reality yet
The rise of generative AI is underway, and experts in every field are debating its real consequences, what kind of disruptive changes are approaching, and how fast to expect them. When it comes to ransomware and malware authoring in general, large language models (LLMs) that can assist with computer programming are already being abused to create code capable of harming other people’s computers. But are we there yet?
AI ransomware is still only a concept
The quick answer is no. Aside from “DeepLocker,” a proof of concept developed by IBM Research, AI-powered ransomware does not exist in the wild, and nobody has been hit by anything that resembles a machine-made strain.
Despite the restrictions OpenAI has placed on ChatGPT to prevent it from producing dangerous code, researchers experimenting with the tool managed to coax it into generating something with ransomware-like functions, including asymmetric encryption and ransom note creation.
However, this ChatGPT-written ransomware strain is unlikely to make waves in the cybercrime underworld, as the result is no better than what script kiddies can already acquire for free on the dark web, hacker forums, or even public code repositories like GitHub.
Even if someone invested more time and effort into convincing or tricking ChatGPT into improving its ransomware, the tool’s limitations make it far too unreliable and cumbersome for creating a functional, performant strain.
What would AI attacks look like?
Although these limitations currently keep AI-powered ransomware out of prime time, that is not expected to last much longer. Experts believe it is only a matter of time before other language or coding models that are more permissive about creating malicious tools are launched, or before rogue firms and research teams build something without ethical guardrails.
For starters, AI could automate much of the manual labor involved in ransomware attacks, including:
- Conducting penetration testing against target networks and systems to identify potentially exploitable security flaws, then exploiting them to plant a backdoor or shellcode.
- Creating polymorphic, cross-platform binaries that constantly mutate their code to evade detection.
- Developing worm-like propagation capabilities for spreading across computers and networks with differing protections.
- Adapting social engineering lures based on preliminary data collected through automated scraping and OSINT research, improving initial compromise and network intrusion rates.
Experts in the field warn that ransomware gangs have accumulated dizzying wealth over the years, so they’re among the first who can afford cutting-edge AI tools that would help them automate and perfect their attacks as much as possible.
Some even suggest that ransomware gangs would be willing to hire AI/ML experts at multiples of their current salaries to develop custom tools purpose-built for launching ransomware attacks.
As for when all of this is expected to first appear in the wild, seasoned cybersecurity experts put the timeline at six to twelve months.
MonsterCloud is monitoring the situation closely and working with partners to evaluate how AI can help with defense and data restoration, always striving to stay a step ahead of malicious actors.
Sources:
- https://www.malwarebytes.com/blog/news/2023/03/chatgpt-happy-to-write-ransomware-just-really-bad-at-it
- https://www.protocol.com/enterprise/ai-ml-ransomware-cyberattacks-automation
- https://www.techtarget.com/searchsecurity/tip/Does-AI-powered-malware-exist-in-the-wild-Not-yet
- https://i.blackhat.com/us-18/Thu-August-9/us-18-Kirat-DeepLocker-Concealing-Targeted-Attacks-with-AI-Locksmithing.pdf