
The Future of AI in Cybersecurity: How to Plan Ahead for AI Disruption

By Elliot Anderson  |  February 19, 2024

Find out how AI is likely to impact the cybersecurity industry in the next decade. 

Artificial intelligence has been an integral part of the cybersecurity industry for several years now. However, the widespread public adoption of Large Language Models (LLMs) that took place in 2023 has brought new and unexpected changes to the security landscape. 

LLMs like OpenAI’s ChatGPT, Google’s Bard, and others have opened new capabilities — and new threats — across the global economy. Security leaders in every sector and industry will need to change their approach to accommodate this development. 

It’s almost certain that new AI-powered tools will increase the volume and impact of cyberattacks over the next few years. However, they will also enhance the capabilities of cybersecurity leaders and product experts. Lumifi’s Research and Development team uses the latest AI tools to refine our MDR capabilities every day.

These developments will likely occur at an uneven pace, typical of a global arms race. Cybercriminals may gain a temporary advantage at some point, only to be subdued by new cybersecurity deployments, and then the cycle will repeat. 

This volatile environment should inspire cybersecurity professionals to increase their AI proficiency. Individuals with broad experience, product expertise, and a successful track record will be highly sought after in the industry. 

What exactly do LLMs do? Cybersecurity use cases explained 

LLMs enable anyone to process large amounts of information, democratizing the ability to leverage AI. This offers significant advantages to people and organizations who want to improve the efficiency, intelligence, and scalability of data-centric workflows. 

  • For cybersecurity practitioners, that means more efficient vulnerability management, faster incident detection and response, and improved security alert handling (a brief sketch follows this list). 
  • For cybercriminals, LLMs enhance exploit automation and vulnerability scanning capabilities while removing language barriers characteristic of many phishing attacks. 
  • AI systems themselves are susceptible to misuse and abuse. Unintentional data leakage risks will increase as AI becomes more commonplace. 
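
To make the first of those defensive use cases concrete, here is a minimal sketch of how a team might ask an LLM to prioritize raw vulnerability scanner findings. The call_llm helper, the example findings, and the prompt wording are hypothetical placeholders, not a description of any specific product or API.

    # Minimal sketch: LLM-assisted vulnerability triage.
    # call_llm is a hypothetical stand-in for whichever LLM API your team uses.
    import json

    def call_llm(prompt: str) -> str:
        """Placeholder for a real LLM client call (e.g., a chat-completion request)."""
        raise NotImplementedError("Wire this up to your LLM provider of choice.")

    def prioritize_findings(findings: list[dict]) -> str:
        """Ask the model to rank scanner findings by likely business impact."""
        prompt = (
            "You are assisting a vulnerability management team.\n"
            "Rank the following findings by urgency and briefly justify each ranking:\n"
            + json.dumps(findings, indent=2)
        )
        return call_llm(prompt)

    example_findings = [
        {"finding": "outdated TLS configuration", "asset": "internet-facing web server"},
        {"finding": "unpatched office software", "asset": "internal test workstation"},
    ]
    # print(prioritize_findings(example_findings))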

When the cybersecurity industry was dominated by hardware products, security leaders only changed products when the next version of their preferred hardware was available. Now, AI-powered software can update itself according to each individual use case, requiring security teams to continuously evaluate LLM systems for safety and compliance. 

Let’s look more closely at each use case and how it’s likely to evolve as AI technology advances. 

How new AI technologies will enhance cybersecurity workflows 

There are two major advantages to leveraging LLM capabilities in cybersecurity.   

  • Better detection workflows with more accurate data enrichment. High-quality labeled data is vital to AI-powered detection workflows, but good data is hard to come by. LLMs are good at gathering and synthesizing initial data from large datasets, allowing security teams to configure new detection rules without limiting their scope to rigidly structured field data. 
  • Better security alert and incident response automation. Talent scarcity is still a major problem in the cybersecurity industry, and LLM-powered automation can streamline workflows for individual analysts and increase their capabilities (see the sketch below). 
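
As a rough illustration of the second point, the sketch below enriches a raw alert with asset context and asks an LLM for an analyst-ready summary and recommended next steps. The alert fields, the lookup_asset_owner helper, and call_llm are hypothetical placeholders rather than a description of any particular SIEM or MDR integration.

    # Sketch: LLM-assisted alert enrichment and triage.
    # call_llm and lookup_asset_owner are hypothetical placeholders.
    import json

    def call_llm(prompt: str) -> str:
        """Placeholder for a real LLM client call."""
        raise NotImplementedError("Wire this up to your LLM provider of choice.")

    def lookup_asset_owner(hostname: str) -> str:
        """Placeholder for an asset inventory or CMDB lookup."""
        return "unknown owner"

    def triage_alert(alert: dict) -> str:
        """Combine the raw alert with context and request a triage summary."""
        enriched = dict(alert, asset_owner=lookup_asset_owner(alert["host"]))
        prompt = (
            "Summarize this security alert for an analyst and suggest next steps:\n"
            + json.dumps(enriched, indent=2)
        )
        return call_llm(prompt)

    example_alert = {"rule": "multiple failed logins", "host": "hr-laptop-17", "count": 42}
    # print(triage_alert(example_alert))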

These two benefits will certainly improve over time and lead to new AI capabilities for security teams. SOC analysts may soon be able to read thousands of incident response playbooks at once and identify security gaps and inconsistencies in near real-time.  

This will require the creation of a domain-specific cybersecurity LLM capable of contextualizing incident response playbooks at the organizational level. AI-powered SIEM platforms like Exabeam already provide in-depth behavioral analytics for users and assets, and in time we’ll see similar capabilities expanding into threat response and recovery workflows as well. 
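
One way to approximate that capability today is to embed playbook text and compare it against the response phases an organization expects every playbook to cover. The sketch below assumes a generic embed function and a toy list of expected phases; it illustrates the idea only and is not a description of Exabeam or any other product.

    # Sketch: flagging coverage gaps across incident response playbooks.
    # embed is a hypothetical placeholder for any text-embedding model.
    import numpy as np

    def embed(text: str) -> np.ndarray:
        """Placeholder: return a vector representation of the input text."""
        raise NotImplementedError("Plug in an embedding model here.")

    def cosine(a: np.ndarray, b: np.ndarray) -> float:
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    EXPECTED_PHASES = ["containment", "eradication", "recovery", "stakeholder notification"]

    def find_gaps(playbook_steps: list[str], threshold: float = 0.5) -> list[str]:
        """Return expected response phases that no playbook step appears to cover."""
        step_vectors = [embed(step) for step in playbook_steps]
        gaps = []
        for phase in EXPECTED_PHASES:
            phase_vector = embed(phase)
            if not any(cosine(phase_vector, v) >= threshold for v in step_vectors):
                gaps.append(phase)
        return gaps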

Threat actors will leverage AI to break down operational barriers 

LLMs are invaluable for threat actors, especially when it comes to gaining initial access to their victims’ assets. By practically eliminating language, cultural, and technical communication barriers, they’ve made it much harder for people to reliably flag suspicious content. 

Cybercriminals are already using AI to enhance and automate operations in four key areas: 

  • Automated phishing. Cybercriminals can now send out automated emails, texts, and other communications that mimic the style of legitimate organizations. These messages are easy to personalize at scale without telltale spelling and grammar mistakes. 
  • Social engineering. LLMs enable social engineering by providing accurate scripts for initial access brokers to follow. With no more than a few quick prompts, a cybercriminal can create a believable character with a realistic backstory designed to fool even the most diligent targets. 
  • Impersonation attacks. Impersonation attacks are becoming more commonplace, and not just using LLMs. AI voice-cloning technology is already capable of producing convincing deepfakes, and full video capabilities are right around the corner. 
  • Fake customer support chatbots. These already exist, but they haven’t yet become a widely popular threat vector. As legitimate organizations increasingly adopt AI chatbots into their support workflows, the risks of chatbot spoofing will increase. 

According to one report, phishing attacks have surged more than 1200% since ChatGPT was first released in November 2022. Credential phishing attacks have risen by an astonishing 967% in the same time frame. 

Adjusting to a security landscape dominated by AI means understanding its limitations 

It’s no secret that influential tech leaders and investors are pouring significant resources into AI. Some thought leaders warn that the emerging technology will change every aspect of our lives — going so far as to say we’re charging headfirst into an AI apocalypse fueled by the development of Artificial General Intelligence (AGI). 

While the technology is new, exaggerating the danger of disruptive technology is a familiar cycle. Plato was famously skeptical of writing, and 16th century Europeans destroyed printing presses out of fear. It’s normal to be anxious about new technology. 

 Like writing, printing, and every other technology before it, artificial intelligence has limitations. Security leaders who understand those limitations will be able to navigate the challenges of a society increasingly reliant on AI-powered technologies.  

Some limitations include lack of contextual awareness, high operating costs, perpetuated consensus and amplified bias, and lack of moral agency. Let’s explore these further.

1. AI cannot grasp context independently

Current AI models struggle to comprehend the broad context of situations users present to them. Some perform better than others, but none currently demonstrate the contextual awareness necessary to navigate real-life problems without guidance.

Many tech leaders think this is an engineering problem and believe that eventually LLMs will contextualize information with human-like accuracy. 

This may not be true. We still don’t know how the human brain contextualizes information and articulates it into language. Contextualizing insight by combining data with real-world experience remains a task best-suited to human experts. 

2. AI-powered workflows are resource-intensive 

According to the International Energy Agency, training a single AI model uses more electricity than 100 US homes consume in a year. A typical ChatGPT query consumes 2.9 watt-hours of electricity — about the same amount of energy stored in a typical AA battery. 

By comparison, the human brain consumes about 300 watt-hours of energy per day. Yet it accomplishes significantly more during this time than even the most efficient LLMs. 
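
Taking these figures at face value, a single day of brain activity corresponds to roughly 100 ChatGPT queries’ worth of energy (300 ÷ 2.9 ≈ 103), and no hundred queries come close to matching a day of human reasoning.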

This suggests that there’s more to improving neural network performance than simply adding more nodes and introducing more parameters. It also places an upper limit on the feasibility of increasingly energy-intensive AI processes. At some point, the costs will outweigh the benefits. 

3. AI models have difficulty contradicting consensus 

AI models operate on consensus learned from their training data. If the patterns in that data strongly suggest that a certain response is likely to be correct, the LLM will confidently declare the corresponding answer. If the training data is not accurate, the answer won’t be either. 

When it comes to pure facts, overcoming this limitation may be technically feasible. But when it comes to opinions, values, and judgements, AI-powered tools are not equipped to offer anything but the most basic responses. 

This means that even highly advanced future AI tools may not be able to make convincing arguments against popular consensus. It’s easy to see how this can lead to severe security consequences, especially in cases where popular wisdom turns out to be wrong. 

4. You can’t credit (or blame) AI models for the decisions they make 

AI ethics remains a challenging issue for technology experts, cognitive scientists, and philosophers alike. This problem is deeply connected to our lack of understanding of human consciousness and agency. 

Currently, there is no real consensus about the moral status of artificially intelligent algorithms. This makes it impossible to attribute moral decisions to AI-powered tools or claim they know the difference between “right” and “wrong”. 

We can’t treat AI algorithms as moral agents without also attributing some form of “personhood” to them. Most people strongly doubt that LLMs like ChatGPT are “people” in that sense, which means someone else must take responsibility for the decisions that AI algorithms make — including their mistakes. 

Where will AI take the cybersecurity industry? 

Security leaders are beginning to distinguish between generative AI and predictive AI. While people are understandably excited about generative AI, the true information security workhorse is predictive AI, which is a must-have technology in today’s security operations center environment. 

As the stakes of AI-powered cybercrime get higher, leaders will become increasingly risk averse. Few executives or stakeholders will be willing to risk their livelihoods on unproven security solutions and vendors. 

In this scenario, security leaders who entrust their detection and response workflows to reputable product experts with proven track records will be rewarded. If your detection and response provider doesn’t leverage proven AI expertise in its blue team operations, it will eventually fall behind. 

Positive security incident outcomes may become difficult to achieve, but guaranteeing them will be crucial. Learn more about how Lumifi achieves this critical goal by combining AI-enriched data with human expertise and best-in-class automation. Secure your spot for our webinar, Unveiling ShieldVision's Future & New Series of Enhancements, taking place on February 14th. 

Lumifi is a managed detection and response vendor with years of experience driving consistent results with the world’s most sophisticated AI technologies. Find out how we combine AI-enhanced automation with human expertise through our ShieldVision™ SOC automation service. 

  
