Unprecedented escalation: How GenAI is changing cybersecurity

12 October 2023 | Cyber Security & CISO Insights | 8 min read

AI can make everyone more productive — including hackers. Here's how cybersecurity professionals are adapting.

Phishing attacks are a numbers game. Scammers know most of their messages will go unread, get caught by anti-phishing and spam filters, or be correctly identified as malicious by their recipients. A small percentage of recipients will open a phishing email. An even smaller number will click its links or open its attachments.

So, phishers go big. If they email 100,000 people, and only 1% of those targets click a malicious link, that's still 1,000 victims potentially getting their data stolen or falling for other scams.

Scammers are always looking for new tools to help them scale up to hit more people with less effort. Imagine how many passwords a cybercriminal could net if they emailed a million targets — a billion, even.

Unfortunately, generative AI tools — the same ones many use to ramp up productivity in legitimate pursuits — could make these massive phishing campaigns a reality.

As a result, cybersecurity pros are facing an unprecedented escalation of cyber threats. At the same time, generative AI tools are changing how cybersecurity teams do their jobs, often in positive ways.

Let's look at how the rise of generative AI is reshaping the cybersecurity landscape for better and worse.

Generative AI makes hacking easier than ever

Social engineering attacks are some of the most prevalent and most potent cyberattacks. According to Verizon's Data Breach Investigations Report, 74% of data breaches involve a human element, including social engineering tactics. These attacks can easily cost organizations millions of dollars.

Social engineering is common because it doesn't require much technical savvy. The hardest part is crafting a believable story. Attackers often pose as well-known brands, but many targets can spot fraudulent emails from a mile away because they're riddled with spelling and grammar mistakes a major business would be unlikely to make.

In the age of ChatGPT, however, sloppy writing is a less reliable red flag. Criminals can use AI to craft more convincing, even flawless, messages instead of writing them from scratch. AI tools also enable hackers to translate their phishing emails into new languages, opening up whole new populations for scamming.

And attackers aren't limited to text. Generative AI tools can create fake videos, images, and audio to back up their schemes. See, for example, the 2019 case of an energy company CEO who got a call from the leader of his parent firm. The leader asked the CEO to transfer $243,000 to a supplier in Hungary, and the CEO dutifully complied. However, the CEO wasn't actually talking to his boss — he was talking to scammers who had used AI-powered audio technology to impersonate his boss.

Now that AI tools are more widespread and easier to use, we can expect these kinds of deepfake attacks to ramp up.

AI also makes it easier for hackers to build new malware. Even cybercriminals with only a passing familiarity with programming can use plain-text prompts to prod generative AIs into whipping up custom malware. Most generative AI tools on the legitimate market have safeguards to prevent unscrupulous actors from creating malicious code. Still, hackers are already devising ways around these barriers and sharing tips on the dark web. Some are even building their own generative AI tools to help fellow cybercriminals craft more effective phishing emails.

All of this is to say: An explosion of new malware strains and social engineering scams may be right around the corner. It may even be starting already. Keeping up with cyber threats was difficult enough in the pre-AI days. How can cybersecurity pros stay ahead of the hackers now?

Cybersecurity's role in the AI era

As hackers gleefully embrace AI, the mood in cybersecurity organizations is often lukewarm. Many cybersecurity pros wonder if their jobs are at risk, as knowledge workers in many other fields do. But generative AI is unlikely to replace cybersecurity teams.

The more likely scenario is that generative AI will enable cybersecurity pros to scale up their defenses, helping them manage the influx of attacks. The human touch, specifically critical thinking and expert judgment, is vital in cybersecurity, and AI can't yet replicate it.

Consider incident response processes as an example. When security controls like intrusion detection systems (IDS) or intrusion prevention systems (IPS) flag anomalies, analysts must investigate thoroughly. That means gathering pertinent data from various network sources, incorporating external threat intelligence, and analyzing the information to understand what happened.

While AI can automate certain aspects of this process, such as data collection and collation, security analysts shouldn't rely solely on AI to draw conclusions and decide how to proceed. Generative AI models can hallucinate, confidently presenting false information as fact.

Even when the AI isn't making things up, it doesn't have a human analyst's knowledge of the network, technology stack, or business environment, so its recommendations tend to be generic. Cybersecurity pros need to use their own expertise and judgment to formulate the most effective incident response plans for their unique environments.

One could set AI rules for responding to very basic attacks, but letting AI react to situations with any level of nuance is bound to disrupt benign business activities. Indeed, this is a known drawback of cybersecurity tools that use machine learning models to identify attacks. They're prone to false positives and liable to interpret any new network activity as suspicious. An authorized user accessing a sensitive database for the first time will likely be treated as a hacker by a purely AI-powered tool.
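
As a rough illustration, here is a minimal sketch of such a novelty-based rule, in Python, with hypothetical users and databases standing in for a model's learned baseline:

```python
# A toy novelty-based detector: the set below stands in for a model's
# learned baseline of "normal" (user, resource) access pairs.
# All names here are hypothetical, for illustration only.

baseline = {
    ("alice", "crm_db"),
    ("bob", "payroll_db"),
}

def is_suspicious(user: str, resource: str) -> bool:
    """Flag any access the baseline has never seen before."""
    return (user, resource) not in baseline

# Alice was just authorized to work on the payroll database for the
# first time. A purely novelty-based rule can't know that, so it
# raises an alert anyway: a false positive.
print(is_suspicious("alice", "payroll_db"))  # True  (benign, but flagged)
print(is_suspicious("bob", "payroll_db"))    # False (matches the baseline)
```

A human analyst who knows about Alice's new assignment can dismiss that alert in seconds; a rule like this one can't.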

Cybersecurity experts must analyze alerts and intel alongside their AI tools to draw those subtle distinctions between legitimate activity and actual threats. This way, they can keep hackers out while ensuring that valid users can do their jobs without interruption.

As AI tools grow more sophisticated, cybersecurity pros' roles will only grow more strategic. They'll focus on drafting effective cyber risk management strategies and building robust defense-in-depth security architectures. In the trenches, they can leverage AI tools to detect, investigate, and respond to potential attacks faster.

That right there is the silver lining in all of this. Just as hackers can use generative AI to launch bigger and bolder attacks, cybersecurity pros can use the same tools to fight back efficiently. A single scammer may be able to fire off hundreds of custom malware strains, but now a single cybersecurity pro can catch and disarm many of them, too.

Cybersecurity training must evolve

Social engineering, aka "human hacking," aims to manipulate people instead of breaking through technical security controls. As the primary targets of phishing campaigns and the like, employees are a critical line of defense; anti-phishing controls and spam filters can't catch every scam.

To help employees spot social engineering attacks, cybersecurity awareness training programs often emphasize looking for hallmarks like bad grammar and awkward English. As mentioned above, generative AI makes this method obsolete. In response, cybersecurity training needs a change in emphasis.

Employees need to know exactly how AI is changing the cybersecurity landscape. Training programs should teach employees how hackers use generative AI and why social engineering attacks are so much harder to identify.

Security training should teach employees to treat every message as suspicious and to interact with it only after establishing trust.

To establish trust, employees should first triple-check the sender's email address. Does the message actually come from the person it claims to come from? In particular, employees must look out for the creative misspellings scammers sometimes use to disguise themselves — like writing "tornsmith@totallyrealwebsite.net" instead of "tomsmith@totallyrealwebsite.net." The first address uses an "r" and "n" to mimic the "m" in "tom." At first glance, it can be convincing.
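
To make that trick concrete, here is a minimal sketch, in Python, of how a simple filter might catch this kind of lookalike substitution. The known-sender list reuses the hypothetical addresses above; real mail-security tools use far more sophisticated techniques.

```python
# A toy lookalike-sender check. The known-sender list and addresses are
# hypothetical, for illustration only.

KNOWN_SENDERS = {"tomsmith@totallyrealwebsite.net"}

def normalize(address: str) -> str:
    """Collapse the 'rn'-for-'m' lookalike so spoofs map onto the real string."""
    return address.lower().replace("rn", "m")

def looks_like_spoof(address: str) -> bool:
    """True if an address isn't a known sender but mimics one after normalizing."""
    if address in KNOWN_SENDERS:
        return False
    return normalize(address) in {normalize(s) for s in KNOWN_SENDERS}

print(looks_like_spoof("tornsmith@totallyrealwebsite.net"))  # True: spoof
print(looks_like_spoof("tomsmith@totallyrealwebsite.net"))   # False: legitimate
```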

Of course, hackers can spoof or hijack real email accounts to send malicious messages, so confirming the email address is not enough to fully establish trust. Employees must also scrutinize the message's content, keeping two questions in mind: Am I expecting this message, and are its contents typical?

For example, if an employee receives a text from the CISO asking them to urgently buy a few gift cards, that should give them pause. Gift card purchases aren’t usually the CISO’s purview, so such a message would be both unexpected and unusual.

Finally, employees should always adhere to their company’s policies and processes when responding to requests. Ideally, a company shouldn’t permit employees to take major actions, like transferring money, based on emails or phone calls alone.

If such actions are permitted, however, employees should follow additional steps to verify requests before complying. One way to do this is to use out-of-band channels of communication to confirm requests. So, if the CEO sends an email asking an employee to move a large sum of cash to a new bank account, the employee should call the CEO to confirm, instead of replying to the email. After all, that email could be coming from a compromised account.

That said, cybersecurity teams can't outsource the responsibility for catching every attack to employees. It's ultimately up to security teams to stay aware of attacker tactics, techniques, and procedures; put the right controls in place; and leverage the right tools to protect their networks and users.

To that end, training must evolve for cybersecurity pros, too. Specifically, cybersecurity teams need to learn how AI is changing the field, how hackers are using it, and how they can use it to scale up their own efforts. We'll address this topic in more detail in an upcoming article — subscribe to our blog to receive updates.

Learn how Skillsoft can train cybersecurity professionals to combat escalating cyber threats in the era of generative AI.