How Cybersecurity Experts Can Use AI to Fight AI-Powered Cyberattacks
Generative AI could help security teams fend off hackers by turbocharging security controls and speeding up incident response.
Generative AI, proponents say, will make us all more productive. That certainly seems to be true across many domains, cybersecurity included.
As we covered in a previous article, Unprecedented Escalation: How GenAI is Changing Cybersecurity, AI tools have lowered the barrier to entry into cybercrime, threatening to unleash a deluge of new cyberattacks. Even people with no technical savvy can launch shockingly effective phishing scams and build new malware strains with generative AI's help.
For their part, cybersecurity pros are fighting fire with fire. That is, they're using their own AI tools to defend their networks against AI-enabled adversaries.
Let's look at how cybersecurity pros can leverage AI for good.
AI Reinforces Traditional Security Controls
Many of the controls cybersecurity experts use, like firewalls and anti-virus software, rely on signature-based methods to detect cyberattacks. These tools maintain databases of signatures, telltale signs associated with certain attacks — like a piece of code known to appear in a specific type of ransomware. Signature-based controls compare network activity to their databases, raising alerts and taking action whenever they spot a signature.
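To make the mechanism concrete, here's a minimal sketch of signature matching in Python. The signature names and byte patterns are invented for illustration; real tools maintain databases of millions of signatures.

```python
# Minimal sketch of signature-based detection: scan a payload for
# byte patterns from a (hypothetical) signature database.
SIGNATURES = {
    "EvilLocker ransomware": b"\x4d\x5a\x90\x00evil_locker",
    "Known phishing kit":    b"base64_decode($_POST['c'])",
}

def scan_payload(payload: bytes) -> list[str]:
    """Return the names of any known signatures found in the payload."""
    return [name for name, pattern in SIGNATURES.items() if pattern in payload]

alerts = scan_payload(b"...base64_decode($_POST['c'])...")
if alerts:
    print(f"ALERT: matched signatures: {alerts}")  # flag for the SOC
```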
The downside of signature-based detection is that it can't catch any new cyberattacks whose signatures have yet to be recorded. Now that hackers can use generative AI to create new malware strains relatively easily, the number of never-before-seen attacks may increase significantly, and signature-based defenses may struggle to keep up.
Cybersecurity vendors have already taken steps to address this problem by introducing anomaly-based threat detection methods. Anomaly-based tools use AI and machine learning to build models of normal network traffic. Then, they compare network activity to this normal baseline. Anything that doesn't fit the model — in other words, any anomaly — is flagged.
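For illustration, here's a minimal sketch of that idea using scikit-learn's IsolationForest. The traffic features and values are invented for this example; real deployments model far richer telemetry.

```python
# Minimal sketch of anomaly-based detection: fit a model to "normal"
# traffic features, then flag anything that deviates from the baseline.
# Each row is illustrative, e.g., [bytes sent per minute, requests per minute].
import numpy as np
from sklearn.ensemble import IsolationForest

normal_traffic = np.array([[500, 3], [520, 4], [480, 2], [510, 3], [495, 4]])
model = IsolationForest(contamination=0.01, random_state=42).fit(normal_traffic)

new_events = np.array([[505, 3], [90000, 250]])  # the second row is unusual
for event, label in zip(new_events, model.predict(new_events)):
    if label == -1:  # -1 means the model considers this an anomaly
        print(f"Anomaly flagged for analyst review: {event}")
```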
It's worth noting that, historically, anomaly-based threat detection tools have been prone to false positives. However, as AI grows more sophisticated, these tools are getting better at distinguishing between actual attacks and new behavior from authorized users.
While anomaly-based controls will likely always require some human oversight, this AI-powered threat detection method can help prevent even AI-generated attacks. Hackers can no longer count on novelty alone to sneak into a network.
AI Streamlines Incident Response
AI doesn't just help security analysts prevent cyberattacks — it helps them respond, too. Streamlining incident response may be AI's single biggest benefit for cybersecurity teams right now.
In most enterprise environments, when security controls flag a possible threat, they alert the security operations center (SOC). A SOC analyst then has to determine whether the threat is real, how serious it is, and what to do about it. To triage and investigate alerts, analysts have to pull data from disparate internal and external sources, collate it all, and analyze it.
This process takes time. Highly complex or well-camouflaged threats could require hours of investigation. Even if analysts only need 10 or 15 minutes, in that timeframe, hackers can steal sensitive data and install malware to persist in the environment and further the attack. Cybercriminals can use AI to launch more attacks with less effort, so incident responders can't afford to spend more time investigating threats than they have to.
Here is where AI tools can help again. AI can automate the most time-consuming parts of incident investigations — i.e., collecting and collating relevant data from security controls, network analytics, and even external threat intelligence sources.
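As a rough illustration, here's a minimal sketch of that enrichment step. The two fetch functions are hypothetical stand-ins for real SIEM, firewall, and threat intelligence APIs.

```python
# Minimal sketch of automated alert enrichment: gather context for an
# alert from several (hypothetical) internal and external sources and
# collate it into a single record for the analyst.
def fetch_firewall_logs(src_ip):   # stand-in for a SIEM/firewall API call
    return [{"src": src_ip, "action": "blocked", "port": 445}]

def fetch_threat_intel(src_ip):    # stand-in for an external intel feed
    return {"reputation": "malicious", "seen_in_campaigns": 3}

def enrich_alert(alert: dict) -> dict:
    ip = alert["src_ip"]
    return {
        **alert,
        "firewall_events": fetch_firewall_logs(ip),
        "threat_intel": fetch_threat_intel(ip),
    }

print(enrich_alert({"id": "A-1042", "src_ip": "203.0.113.7", "rule": "C2 beacon"}))
```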
Some generative AI tools can even analyze the data and cut out much of the prep work by highlighting key points, prioritizing alerts, and suggesting possible responses. That way, security analysts can focus on higher-value tasks like intercepting and eradicating threats without sacrificing accurate, thorough investigations.
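Building on the sketch above, handing the enriched alert to a generative model might look something like this. The openai client library and its chat completions call are real, but the model name and prompt are illustrative assumptions; substitute whatever LLM your organization has approved.

```python
# Hedged sketch: ask a generative model to summarize and prioritize an
# enriched alert for the SOC analyst.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def triage_summary(enriched_alert: dict) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: substitute your approved model
        messages=[
            {"role": "system", "content": "You are a SOC triage assistant. "
             "Summarize the alert, rate its severity (low/medium/high/critical), "
             "and suggest possible next response steps."},
            {"role": "user", "content": str(enriched_alert)},
        ],
    )
    return response.choices[0].message.content
```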
AI Makes Online Research Faster, When Used Correctly
Cybersecurity professionals need to know a lot to do their jobs. They must be deeply familiar with their company's tech stack, the latest cybersecurity tools, and best practices to face the ever-evolving cyberthreat landscape.
That's a lot of information to retain. For perspective, consider that more than 1,500 new security vulnerabilities are discovered every month, on average. No one could reasonably track every single one.
That's why many security analysts spend a decent amount of time on search engines, researching new things or brushing up on old concepts. While search engines can yield the information they need, cybersecurity pros must sift through the results to find the most relevant sources.
Generative AI tools can help cybersecurity pros find answers much faster. Instead of simply listing sources, generative AI can distill the results down to the key takeaways. With the right prompts, security analysts can spend less time scrolling through websites and more time acting on what they learn.
The potential of GenAI in this context is not just theoretical. A study by McKinsey indicates that GenAI can significantly enhance productivity and efficiency across various industries; in fact, in two-thirds of the industry opportunities McKinsey evaluated, applying GenAI can lead to notable improvements. This underscores the transformative potential of GenAI tools, not only for cybersecurity professionals but across diverse sectors.
That said, cybersecurity pros need to be cautious. Generative AI has been known to hallucinate — essentially, make things up. Apply a sniff test to generative AI outputs before trusting them fully. Make sure the AI's insights square with prior knowledge and experience. When in doubt, ask for sources so you can see for yourself where the AI is getting its information.
Analysts must also be judicious about the information they share with AI tools, especially if they use publicly available solutions like ChatGPT. Organizations can't control how these AI models might use proprietary company data, and there have been cases of some users' data being leaked to others. Organizations should consider creating their own proprietary generative AI tools or using ones designed by trusted vendors specifically for corporate use.
3 Tips on Using AI to Fight AI
1. Implement a Formal AI Policy and Training
Whether or not your organization has officially adopted generative AI, your cybersecurity analysts are probably using it. More concerning, so are the rest of your employees. To ensure everyone uses GenAI properly, it's best to draft a formal generative AI policy.
The policy should cover approved AI tools, the situations in which employees can use AI, and guidelines for using AI most effectively. It should also explicitly outline the types of company data that can and can't be shared with AI tools.
Train security analysts on generative AI so that they fully understand how these tools work and how they can use AI to fight cyberthreats. The more familiar analysts are with generative AI, the more skillfully they'll deploy it to defend the organization.
2. Emphasize the Human Touch
Generative AI is highly impressive but far from perfect. To use AI effectively and avoid its pitfalls, security analysts can't simply follow its orders. Instead, the analyst's role is to act as the AI's supervisor: issuing directions, evaluating outputs, and bringing their expertise to bear when the situation calls for it.
Analysts should feel confident to tweak or entirely disregard an AI's suggestions. Ultimately, generative AI works best as a productivity-enhancing tool, not a people-replacing tool.
3. Don't Neglect Traditional Controls
AI is a powerful tool in the fight against hackers today, but the old weapons still have their place. Some of the most basic security controls can offer strong defenses against AI-enabled adversaries.
For example, multifactor authentication (MFA) can keep hackers out of users' accounts. Even if cybercriminals use sophisticated phishing schemes to steal user credentials, they won't be able to get in if a second (or third) authentication factor is in use.
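As a concrete illustration, here's a minimal sketch of a time-based one-time password (TOTP) check using the pyotp library. The login helper is a hypothetical stand-in for a real authentication flow.

```python
# Minimal sketch of a second authentication factor using time-based
# one-time passwords (TOTP). Even with a stolen password, an attacker
# who lacks the current code is rejected.
import pyotp

secret = pyotp.random_base32()  # provisioned once per user, stored server-side
totp = pyotp.TOTP(secret)

def login(password_ok: bool, otp_code: str) -> bool:
    # The password alone is never enough; the TOTP code must also verify.
    return password_ok and totp.verify(otp_code)

print(login(True, totp.now()))  # legitimate user with the current code -> True
print(login(True, "000000"))    # phished password, wrong code -> False
```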
Similarly, a consistent patch management practice ensures systems are up to date and protected against the most pervasive malware, including AI-generated strains.
These speed bumps are often enough to dissuade would-be attackers, especially the newbies who have only just gotten into the cybercrime game through AI.
Bracing for the Future
As AI advances, attackers and defenders will keep ramping up their usage of these tools. In the face of AI-enabled cyberthreats, the cybersecurity community must continue investing in and leveraging its own AI-powered defenses.
To fully harness the potential of AI in cybersecurity, organizations must weave AI into their formal processes and policies. Furthermore, security teams need access to ongoing training to stay on top of the latest developments in AI technology.
In the age of AI, cybersecurity pros who fail to adapt will be outflanked both by hackers and by peers who embrace AI.
Learn how Skillsoft can help cybersecurity pros sharpen their AI skills and defend their networks from escalating cyberattacks.