Behavioral AI is our best hope for fighting social engineering threats

2025/02/06

While business leaders and pundits spend time and spill ink arguing about where AI is on the hype cycle, cybercriminals have been busy launching devastating, AI-powered attacks. AI-generated social engineering attacks in particular — including phishing and business email compromise — have risen sharply, with widely available tools like ChatGPT (and its malicious counterparts GhostGPT, FraudGPT and WormGPT) giving fraudsters with no tech expertise a whole new playground for creating attacks. They’re devising sophisticated, incredibly believable email attacks at scale, and targeting the most vulnerable attack vector of any enterprise: the humans at every level of an organization.

“Humans are the most vulnerable and the most valuable endpoints in the organization,” says Evan Reiser, CEO of Abnormal Security. “Conventional security is focused on protecting infrastructure, but as long as you have humans interacting with each other in business — especially when it’s to do with sensitive information — those humans will remain points of vulnerability that attackers can try to exploit. And now that’s way easier to do with AI tools.”

To win against the future of AI-generated attacks, the world needs a few things, Reiser says: first, a deep understanding of the kinds of social engineering attacks happening both now and on the horizon — from personalized, perfectly written phishing emails to sophisticated deep fakes that can mimic human interaction almost perfectly. Second, a new behavioral approach to stopping these threats, because the current generation of detection tools simply isn’t built to spot highly convincing, realistic-looking email attacks. And third, solutions that operate at machine speed to detect and defend, which matters more every day as the labor gap in security talent widens.

Malicious AI and the growth of social engineering

The top cybercrimes today target the email channel — phishing is the number-one cause of breaches and social engineering is the primary cause of financial loss, Reiser says.

“If you want to be a social engineer or a phisher, AI is the best tool that’s ever happened to you,” he explains. “ChatGPT can write the perfect message because it understands how businesses work, can make very accurate guesses about, for example, the language that’s used by accounts payable to divert payments, and can easily personalize that communication for each and every target.”

Plus, the types of attacks that used to take hours to craft now take seconds. The richness and sophistication of these social engineering attacks are greater than ever, now that they’re generated by large language models (LLMs) trained on much of the human knowledge available on the internet. These tools provide a tremendous amount of critical context about how people work, how someone in a particular job role in a particular industry would respond in a broad array of situations, and more.

“There are guardrails built into these AI tools — for instance, it won’t tell you how to steal money from a bank,” Reiser says. “But if you say, ‘I’m an employee stuck overseas and I urgently need to change my payroll information,’ ChatGPT will help you write a convincing message that can then be utilized maliciously.”

In the past, criminals poured significant time into manually researching and profiling the most valuable and vulnerable targets; now, AI offers that ability at scale. With social media proliferating in tandem, simply plugging a LinkedIn profile into an AI model yields an instant snapshot of a person’s role, interests, contacts and more — all of which helps criminals both plan and execute attacks more effectively.

Keeping human vulnerability at the forefront of security strategy

The mainstream AI tools that most of us are familiar with today are LLMs, which generate text, so it’s unsurprising that fraudulent emails and text messages are rapidly on the rise. But other forms of malicious media generation are on the horizon, including deep fakes. We’re just around the corner from a world where AI-powered deep fake avatars could join Zoom meetings, pretending to be a trusted executive.

In addition, image generation is getting better by the day, and we’re close to the point where some of this content, whether text, images or video, can no longer be distinguished from the real thing by humans, Reiser says. Video is nearly there, becoming increasingly real-time and interactive. While all of this is great for the ‘good guys,’ we must remember that every advance in technology comes with some risk of exploitation by bad actors.

Eventually, any kind of information medium that’s used by humans will become a potential vehicle for attackers to exploit. Today’s attack playbooks are shifting, as cybercriminals focus less on breaking through firewalls and more on using deception tactics to trick people themselves. The future of cybercrime will see attackers spending significantly less time focused on infrastructure and even more time targeting human behavior through social engineering, aided by tools like AI.

This, of course, has major implications for security, as traditional perimeter-focused approaches will no longer work. Sure, you can block an IP address, but you can’t block the use of email, phone calls or Zoom meetings and hope to operate an effective business.

“Humans are inherently accessible, but they are also inherently deceivable,” Reiser says. “There’s a reason why we still need humans to do a lot of today’s knowledge work: unlike robots, humans can make nuanced judgments and decisions. Unfortunately, that judgment can also be influenced and taken advantage of by social engineering techniques. And while you can patch your firewall and your servers, you can’t patch your humans.”

The explosion of AI is driving a new wave of cybercriminal offense, but it also presents a unique opportunity for defenders. In the battle against malicious AI, organizations need to leverage good AI to fight back and better protect their most vulnerable point: their people.

Detecting behavioral anomalies at scale

Deep fake technology is still developing, and today many of us can distinguish a real human from a deep fake via physical cues. You might be able to spot a Zoom deep fake of your coworker, for instance, because you know their speech patterns, tone and general mannerisms. But as deep fakes become more sophisticated, spotting these tells will become increasingly difficult. We are already nearing this point.

What this means for defense is that we’re going to have to look for other, subtler behavioral anomalies: whether our “coworker” is appearing on Zoom at a time they’d ordinarily be online, for instance, or whether they’re a regular participant in those kinds of meetings.

“We’re taking the same approach for email attacks today,” Reiser adds. “If an email comes through with a known indicator of compromise — like a bad IP address, or a malicious attachment or URL — it can be automatically detected and filtered out by legacy technology tools. But malicious AI flips the script, enabling adversaries to create targeted emails that omit these indicators entirely and slip by unnoticed.”
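To illustrate why those legacy tools fall short, here is a minimal, hypothetical sketch (not any vendor’s actual pipeline) of an indicator-of-compromise check: the message’s observable artifacts are matched against known-bad lists, and an AI-crafted email with a clean sender, no links and no attachments simply produces no match. The lists and fields below are illustrative assumptions.

```python
# Hypothetical sketch of a legacy, indicator-of-compromise (IOC) email filter.
# The threat lists and Email fields are illustrative, not a real product's API.
from dataclasses import dataclass, field

KNOWN_BAD_IPS = {"203.0.113.7"}                    # example threat-intel entries
KNOWN_BAD_URLS = {"http://login-verify.example"}
KNOWN_BAD_HASHES = {"9f86d081884c7d65"}            # example attachment hash

@dataclass
class Email:
    sender_ip: str
    urls: list = field(default_factory=list)
    attachment_hashes: list = field(default_factory=list)

def legacy_ioc_filter(msg: Email) -> bool:
    """Return True if the message matches any known indicator of compromise."""
    return (
        msg.sender_ip in KNOWN_BAD_IPS
        or any(u in KNOWN_BAD_URLS for u in msg.urls)
        or any(h in KNOWN_BAD_HASHES for h in msg.attachment_hashes)
    )

# An AI-written payment-diversion email sent from a clean address, with no
# links and no attachments, triggers none of these indicators.
bec_email = Email(sender_ip="198.51.100.24")
print(legacy_ioc_filter(bec_email))  # False -> the message is delivered
```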

This demands a new kind of solution that can read behavioral signals instead of threat signals, matching each message against a behavioral baseline built for every known contact inside and outside the organization. This is where good AI has a powerful role to play, serving as the engine that accurately detects and analyzes behavioral anomalies — stopping attacks in their tracks before they reach their intended target.
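To make the behavioral side concrete, here is a minimal, hypothetical sketch of that idea: a per-contact baseline (typical send times, usual recipients, whether payment-change requests are normal for that person) and a simple anomaly score for each new message. Production behavioral AI learns these baselines with far richer models; every feature, weight and threshold below is an illustrative assumption.

```python
# Illustrative behavioral-baseline scoring; the features, weights and the
# flagging threshold are assumptions made for this sketch, not a vendor model.
from dataclasses import dataclass

@dataclass
class ContactBaseline:
    usual_send_hours: set        # hours of the day this contact normally emails
    usual_recipients: set        # people this contact normally writes to
    requests_payment_changes: bool = False

@dataclass
class Message:
    sender: str
    recipient: str
    hour_sent: int
    mentions_payment_change: bool

def anomaly_score(msg: Message, baseline: ContactBaseline) -> float:
    """Add up simple behavioral deviations; a higher score is more suspicious."""
    score = 0.0
    if msg.hour_sent not in baseline.usual_send_hours:
        score += 1.0             # unusual time of day for this sender
    if msg.recipient not in baseline.usual_recipients:
        score += 1.0             # unusual sender-recipient relationship
    if msg.mentions_payment_change and not baseline.requests_payment_changes:
        score += 2.0             # out-of-character financial request
    return score

baselines = {
    "cfo@example.com": ContactBaseline(
        usual_send_hours={9, 10, 11, 14, 15},
        usual_recipients={"controller@example.com"},
    )
}

FLAG_THRESHOLD = 2.5  # assumed cutoff for holding a message for review

msg = Message(sender="cfo@example.com", recipient="ap-clerk@example.com",
              hour_sent=3, mentions_payment_change=True)
print(anomaly_score(msg, baselines[msg.sender]) >= FLAG_THRESHOLD)  # True -> flagged
```

A real system would learn these baselines continuously from message history and weigh far more signals, but the principle is the same: the message is judged against what is normal for that specific relationship, not against a list of known-bad artifacts.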

This approach, protecting people by using AI for behavioral anomaly detection, has proven very effective in fighting sophisticated email attacks, both human- and AI-generated. And email is just the beginning — there is untapped potential to scale behavioral AI security across a much broader set of security use cases, at a scale human security analysts cannot match on their own.

Because while humans are good at pattern recognition, they’re working with a relatively small amount of data. At a company with 100,000 employees, no security professional could possibly know every one of those people, what they do, how they work or who they interact with — but AI can. It can apply that same level of intuition and pattern recognition that humans use, at big-data scale, to make decisions at machine speed.

“It’s an extremely effective approach, and we’ve seen success in email security as well as other adjacent areas,” Reiser says. “That’s why, even though it feels like there is doom and gloom surrounding the dark side of AI, I feel positive about its long-term potential for good and how it could transform the way we as a civilization fight cybercrime.”

Filling the labor market gap with AI-native security

These new behavioral AI tools not only reduce risk to your people, but also take on a lot of the tedious labor previously delegated to humans, like digging through log files and processing data, ultimately freeing up a huge amount of time for security operations teams. That’s important for the cybersecurity industry overall right now, Reiser says. In a world where millions of security jobs are unfilled — all while cyberattacks are becoming more advanced — we need technology to fill the gap and help propel us forward into a world that’s safe and secure for everyone.

“To get there, we need every enterprise to be secure, not just the one or two companies that can shell out the most money on security solutions,” Reiser says. “AI is critical not just for stopping new attacks, but also for helping us transition to a more sustainable paradigm for how we do security at the civilizational level.”

(Copyright: VentureBeat)