Ransomware attacks appeared to decline throughout 2022, but that trend quickly reversed in the first half of 2023. According to Chainalysis, payments to ransomware attackers are on track to reach the second-highest annual total on record.
Multiple factors have combined to lead the industry here, but an explosion of interest in generative AI is the most notable development of the first half of 2023.
Nearly every technology sector stands to undergo dramatic change as large language models like OpenAI's GPT-4 become commonplace – and the ransomware industry is no different.
Ransomware groups have already proven to be highly organized, well-funded criminal enterprises. They consistently cultivate specialist talent to leverage emerging technologies, and generative AI is the next item on every ransomware group's wish list.
Some of the capabilities that large language models and other AI technologies provide to cybercrime groups include:
Low-and-Slow Attacks. AI-enhanced camouflage techniques make it much easier for a malicious application to mimic normal behaviors and blend into a tech stack without drawing suspicion.
Chatbot Negotiators. Poor communication and negotiation skills have traditionally held back ransomware representatives. That disadvantage may largely disappear as automated chat solutions trained on professional negotiation techniques take over.
Automated Kill Radius Expansion. AI-powered automation can expand the scope of what ransomware attacks can achieve. Attacks may infect more of victims' networks, move more quickly through those networks, and target a much larger number of assets.
Personalized Phishing Attacks at Scale. Generative AI is well-suited to creating highly personalized phishing content drawn from scraped social media profile content. Social engineers can now impersonate trusted contacts using believable language drawn straight from their victims' real personalities.
Faster Codebase Changes. Hackers can leverage AI to change malware code and recompile exploits automatically. This can make their attacks more efficient and scalable while reducing the effectiveness of signature-based detection solutions.
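To see why automated recompilation undermines signature-based detection, consider the simplest form of a static signature: a cryptographic hash of the malicious file. The sketch below (illustrative Python, with made-up payload bytes) shows how a single-byte change – the kind any rebuild produces – yields a completely different hash that no longer matches the signature database.

```python
import hashlib

def file_signature(data: bytes) -> str:
    """Compute a SHA-256 digest -- the simplest form of a static file signature."""
    return hashlib.sha256(data).hexdigest()

# A hypothetical known-bad payload and a trivially modified variant
original = b"\x4d\x5a\x90\x00 malicious payload"
variant = original.replace(b"\x90", b"\x91")  # single-byte change, as from a rebuild

# The signature database only knows the original sample
sig_db = {file_signature(original)}

print(file_signature(original) in sig_db)  # True: the known sample is detected
print(file_signature(variant) in sig_db)   # False: the recompiled variant evades the match
```

This is why behavior-based detection matters: the variant behaves identically even though every byte-level signature has changed.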
Security leaders can't rely on traditional security tools to address the threat of AI-enhanced ransomware.
Organizations need to adopt a more flexible approach that addresses the vulnerabilities that AI-enhanced ransomware techniques exploit.
There are multiple ways security leaders can prepare for a new generation of ransomware threats:
Cybersecurity experts have long considered AI the principal weapon in an "arms race" between developers and cybercriminals. Many of the new AI-powered capabilities ransomware groups invest in can only be reliably countered by equally sophisticated AI-powered security solutions.
SIEM platforms equipped with User Entity and Behavioral Analytics can reliably detect when users deviate from their established routines. Security teams must update these systems with custom rules designed to detect malicious workflows powered by AI.
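The core idea behind a UEBA-style rule is statistical: compare a new observation against a user's established baseline and flag large deviations. The following is a deliberately simplified sketch (not any vendor's actual rule language) using a z-score over a user's historical daily outbound data volume; the baseline numbers are invented for illustration.

```python
import statistics

def is_anomalous(history: list[float], observed: float, threshold: float = 3.0) -> bool:
    """Flag an observation that deviates from the user's baseline by more than
    `threshold` standard deviations -- a simplified UEBA-style detection rule."""
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history)
    if stdev == 0:
        return observed != mean
    return abs(observed - mean) / stdev > threshold

# Hypothetical baseline: one user's daily outbound data volume in MB
baseline = [120, 135, 128, 140, 122, 131, 126]

print(is_anomalous(baseline, 133))   # False: a typical day
print(is_anomalous(baseline, 4200))  # True: volume consistent with mass exfiltration
```

Production UEBA models are far richer – they weigh many features per entity, not one – but the principle of alerting on deviation from an established routine is the same.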
Every endpoint device is a potential entry point for AI-enhanced cyberattacks. This is especially true for organizations that rely on large numbers of remote and hybrid workers, as well as schools and universities serving large student populations.
Advanced XDR solutions allow security teams to leverage in-depth incident response-level automation. Organizations can neutralize threats and block unauthorized processes the moment they appear, rather than waiting for a lengthy manual investigation to conclude.
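The logic of that kind of response automation can be reduced to a simple event-driven rule: evaluate each process as it starts and block anything that fails policy, with no analyst in the loop. The sketch below is a toy allowlist-based stand-in (the process names and the kill callback are hypothetical, not a real XDR API).

```python
# Hypothetical allowlist policy for illustration only
ALLOWED_PROCESSES = {"svchost.exe", "explorer.exe", "chrome.exe"}

def handle_process_event(name: str, kill) -> str:
    """Block any process not on the allowlist the moment it starts --
    a simplified stand-in for XDR incident-response automation."""
    if name.lower() not in ALLOWED_PROCESSES:
        kill(name)  # automated containment replaces a lengthy manual investigation
        return "blocked"
    return "allowed"

blocked = []
print(handle_process_event("chrome.exe", blocked.append))     # allowed
print(handle_process_event("encryptor.exe", blocked.append))  # blocked
print(blocked)  # ["encryptor.exe"]
```

Real XDR platforms layer behavioral scoring and telemetry correlation on top of this, but the speed advantage comes from exactly this pattern: decision and containment happen at event time.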
Threat intelligence will become increasingly important as the number and sophistication of emerging threats increase. When hackers can modify and recompile malicious code on the fly, organizations will need dynamic solutions that identify and categorize threats more efficiently than today's generic threat intelligence feeds can.
Curated threat intelligence solutions like Anomali ThreatStream have a vital role to play in this AI-powered cybercrime environment. By filtering and prioritizing threat intelligence data, Anomali can improve the data that security professionals use to identify and mitigate emerging threats.
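Filtering and prioritizing a raw feed boils down to discarding stale or low-confidence indicators and surfacing the most actionable ones first. The sketch below is a toy model of that curation step – the field names, thresholds, and sample indicators are invented for illustration and do not reflect the ThreatStream API.

```python
from datetime import datetime, timedelta, timezone

def prioritize(indicators, min_confidence=70, max_age_days=30):
    """Keep only recent, high-confidence indicators and sort the most
    actionable first -- a toy model of curated feed filtering."""
    cutoff = datetime.now(timezone.utc) - timedelta(days=max_age_days)
    fresh = [i for i in indicators
             if i["confidence"] >= min_confidence and i["last_seen"] >= cutoff]
    return sorted(fresh, key=lambda i: i["confidence"], reverse=True)

now = datetime.now(timezone.utc)
feed = [
    {"ioc": "198.51.100.7", "confidence": 95, "last_seen": now - timedelta(days=2)},
    {"ioc": "203.0.113.42", "confidence": 40, "last_seen": now - timedelta(days=1)},
    {"ioc": "192.0.2.10",   "confidence": 88, "last_seen": now - timedelta(days=90)},
]

print([i["ioc"] for i in prioritize(feed)])  # only the recent, high-confidence IOC survives
```

Curation at this stage means analysts spend their time on the one indicator that matters rather than triaging the whole feed.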
Cybercrime groups already know that social engineering is one of the most effective ways to infiltrate an organization. Phishing and credential-based attacks only stand to become more dangerous and difficult to detect when enhanced with generative AI.
Employees at every level must understand the threat that these new capabilities represent. They need robust policies guiding how to handle phishing messages that are indistinguishable from the real thing. Multi-factor authentication must become a mindset that extends beyond login credentials to phone calls and text messages from seemingly trusted contacts.
Lumifi specializes in SIEM implementations that leverage the latest AI-enhanced technologies to reliably detect sophisticated cyberattacks that would otherwise go unnoticed. Contact a specialist to learn how we can help you mitigate the risk of malicious insiders, credential-based attacks, and AI-powered ransomware today.