Impact of AI on Cyberattacks in 2024

Thomas Le Coz

Content

  • Introduction
  • AI Attacks: Myth or Reality?
  • Initial Access
  • The Impact of AI on the Kill Chain
  • New Threats: BlackMamba, Malware & AI
  • New Threats: RedReaper for Offensive Data Mining
  • The Case of Social Engineering
  • Phishing and Its Variants
  • AI Enables Scaling Up
  • AI Enhances the Quality and Realism of Attacks
  • The Case of Deepfakes Against Businesses
  • The Case of Deepfakes Against Democracies
  • Demonstration of Attack by Voice Cloning
  • How to Protect Yourself? Defense in Depth

Video Transcript

Introduction

My name is Thomas. I'm the co-founder of Arsen. We specialize in simulating attacks. We can come back to that, but basically, if we don't simulate attacks on humans, it's extremely difficult to assess their level of resilience and to train them. Our job is to track threats, reproduce them, and build training paths so that people learn to react better. But I'm not here to talk about what we do. I'm here to talk about AI and AI-related attacks.

Clarifying "AI"

I don't know about you, but we're hearing about AI just about everywhere at the moment, and there's a lot of AI washing. So, two things: one, I'm going to put things back in perspective a little, and two, I'm going to try to be precise, even though I'll probably end up using the terms AI and generative AI interchangeably. The fact is that the recent progress and the massive adoption we've seen are essentially linked to advances in generative AI. In cybersecurity, however, we've been using AI for a very long time already.

The first anti-spam filters are already quite old, and we've been using AI-based classifiers for years. So, is this myth or reality? Have we really seen the attack landscape change since the advent of generative AI? It's been a good year since ChatGPT reached maturity and mass adoption. Have we seen any changes?

State of Initial Access

If we look today, and I've spoken with some of you who run SOCs and know the initial attack vectors well, according to the MITRE ATT&CK classification (IBM X-Force report), the three most common initial access vectors are valid accounts, phishing, and public-facing applications.

  • Valid accounts: Credentials have leaked. Access is available on the dark web, obtained sometimes through phishing and quite often through info stealers (programs dedicated to harvesting credentials).
  • Phishing: It's the classic email that extracts confidential information.
  • Public-facing applications: This vector has taken off quite a bit this year, notably due to vulnerabilities in exposed applications that let attackers enter systems without any human interaction. It's a little outside my field, but it shouldn't be neglected.

For all the marketing that says 90% of incidents come down to human error, you should take that claim with a grain of salt. Humans are not the only culprits in today's cyberattacks.

I'm showing you this because I don't expect that next year we'll have a new line marked AI. I don't see us getting a new initial access vector based solely on AI. We haven't created Skynet, for the Terminator fans.

Offensive Usage of AI

On the other hand, AI is indeed being used, according to various threat reports. It's used to generate content and pretexts for social engineering, with the right vocabulary, wording, and language. This is a real issue when attackers target people whose language is not the attacker's native language.

AI is used to find vulnerabilities and exploit them. ChatGPT is excellent at helping you develop an exploit for a CVE (Common Vulnerabilities and Exposures) that has just been published. It's also used for information retrieval. Large language models trained on huge amounts of data can dig deep into the web, even into corners a superficial Google search wouldn't reach. It's used to find precise information and for information mining.

The attacker today has access to lots of data. Generative AI is very good at structuring unstructured data and at extrapolating and synthesizing massively available information. Basically, AI is a productivity tool for all of us. Those already using it, or starting to use it, can see the productivity gains it creates. But it's above all a productivity gain on the existing stages of the kill chain, the successive stages that lead to a cyberattack. It is not a new kill chain in itself, nor a new attack vector in itself.

I'm going to present two cases because you have to be wary of what can be created from variations of things we already know, updated in the age of AI.

BlackMamba

I'm going to tell you about BlackMamba, a proof-of-concept malware project built around AI. BlackMamba is an initially harmless program that arrives on your systems without any malicious behavior. Behavioral detection has a very hard time seeing anything, and anything based on signature detection won't flag it either, because it doesn't match any known malware signature. It comes in innocuously, seen as a standard tool.

BlackMamba does just one thing: it makes an API call to OpenAI. Basically, it calls OpenAI, sends a prompt, and OpenAI sends back some information. The prompt is designed so that OpenAI sends back malicious code. From a network security point of view, there's nothing alarming, because many productivity tools connect to OpenAI today, so there won't be any particular alert on the network filtering side. The behavior is just a program making an OpenAI request like many others, except that OpenAI returns malicious code. The returned code gives the harmless program everything it needs to rewrite itself. It's polymorphism, something we've known about for a long time: it rewrites itself by reinterpreting the malicious code, compiles it, and transforms itself into malware.

In this prototype, the malware performs keylogging and exfiltrates the captured information via Teams. So, what was previously harmless turns into malware capable of intercepting keystrokes and sending the information out through channels like Teams. None of the ingredients is new: harmless programs, polymorphism, and malware have existed for a long time, but generative AI changes the game here.

RedReaper

Earlier, I mentioned information mining, and that's where the RedReaper case comes in. This is another program that leverages generative AI to its advantage. The principle here is: how do you, as an attacker, exploit a massive data leak? This proof of concept was developed using the Enron emails. For those who remember, Enron was a major American scandal, and the company went bankrupt. A researcher bought all the recoverable Enron emails for $10,000: 600,000 emails of archived discussions. The question then becomes: what can you do with this data? Initially, the corpus was used for graph research and similar purposes.

The RedReaper project processes these emails in four stages:

  1. Filtering: The first step filters the corpus down to anything related to legal matters and persons of authority, including discussions involving the presidency, boards, and lawyers. This reduces the number of emails from 600,000 to about 20,000.
  2. Graph Analysis: The second step involves graph analysis to identify the central points of communication and understand authority relationships. This helps to pinpoint the nerve centers of information and who has access to sensitive information.
  3. Identifying Relevant People: The third step is identifying the most relevant individuals based on authority and communication patterns.
  4. Synthesizing Information: Finally, the fourth step uses generative AI to synthesize these elements and understand the underlying information. This process uncovered cases that had not been exploited but could have been used for blackmail, such as histories of unfair dismissals and other issues swept under the rug during the Enron scandal.

The practical application today involves account takeovers or business email compromise. By accessing a mailbox and its conversation history, which is often stored in the cloud, attackers can exploit this data to identify targets with the authority to approve transfers, pending invoices, or confidential information usable for espionage or blackmail operations. This application of AI once again reshuffles the deck.
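
To make the graph-analysis step (step 2) concrete, here is a minimal sketch of the idea, assuming a corpus already reduced to (sender, recipient) pairs. It is not the RedReaper implementation; the library choice (networkx), the addresses, and the centrality measure are all illustrative assumptions.

```python
# Illustrative sketch of the graph-analysis step: build a communication graph
# from (sender, recipient) pairs and rank the mailboxes that sit at the center
# of the organisation. Not the RedReaper code; addresses are hypothetical.
import networkx as nx

# Hypothetical (sender, recipient) pairs, e.g. parsed from the "From"/"To"
# headers of the emails kept after the filtering step.
edges = [
    ("assistant@corp.example", "ceo@corp.example"),
    ("ceo@corp.example", "cfo@corp.example"),
    ("cfo@corp.example", "counsel@corp.example"),
    ("counsel@corp.example", "ceo@corp.example"),
    ("hr@corp.example", "cfo@corp.example"),
]

graph = nx.DiGraph(edges)

# Betweenness centrality highlights mailboxes that sit on many communication
# paths: the "nerve centers" with access to sensitive information.
centrality = nx.betweenness_centrality(graph)

for mailbox, score in sorted(centrality.items(), key=lambda kv: kv[1], reverse=True)[:3]:
    print(f"{mailbox}: {score:.3f}")
```

In a real corpus, the ranked mailboxes would feed step 3 (identifying the most relevant people) before any generative AI synthesis.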

Social Engineering and AI

Now, let's talk about social engineering, which is my area of expertise. Social engineering covers any attempt to compromise the human factor or induce unwanted actions from it, meaning employees in the broadest sense of the word, including top executives. Phishing is just a symptom, a vector, the application of social engineering we're most familiar with. It can arrive by email (phishing), by SMS or instant messaging (smishing), or by voice call (vishing).

The underlying principles are the same: manipulate people, create emotional reactions, and exploit cognitive biases to elicit non-rational responses. Compliance training often focuses on theoretical knowledge, like not clicking on links or opening attachments. Unfortunately, people behave differently in practice and aren't necessarily rational when caught up in manipulations.

Scaling Social Engineering with AI

AI allows social engineering attacks to scale, increasing both the volume and the sophistication of attacks. There's an interesting report from Abnormal Security covering the early adoption of ChatGPT, the flagship of generative AI commoditization: the number of attacks received increased drastically with the arrival of generative AI. Several factors contribute to this:

  • Instantaneous Translation: It works very well; Japan, which had been relatively spared, saw a big increase in attacks following the arrival of generative AI.
  • Email Generation: AI's ability to write emails on the fly with more relevant pretexts has increased the effectiveness of phishing emails.

Another interesting statistic shows a 1,265% increase in attacks between the last quarter of 2022 and the third quarter of 2023. This illustrates how AI allows for both increased volume and improved quality of attacks. Spear phishing, which requires more resources and sophistication, is more targeted and hurts more.

Types of Social Engineering Attacks

Reports, like one from Vade on Q1 2024, highlight the types of spear phishing attacks that are particularly harmful:

  • Banking Fraud: Messages from banks or advisors.
  • Payroll Fraud: Someone posing as an employee asks HR to change the account their salary is paid into.
  • Lawyer Fraud: Scams linked to legal contexts.
  • CEO Fraud: Classic "Are you available?" scams requiring small transfers.

Initial Contact Strategy

An interesting attack vector is the initial contact email, which is harmless and includes no links or traps but calls for a response. For example, an email asking whether you've validated your vacation days on the new intranet, without including any link. This creates a relationship with the target, turning a one-way flow into a two-way communication thread. With generative AI, these attacks can be scaled up and made more realistic, increasing their effectiveness.

Deepfake Threats

A recent fraud case involved the theft of $25 million via a video conference in which deepfake technology was used to impersonate a CFO and other team members. The employee, trusting the video, made the transfer. This highlights the need for better processes, such as double-checking requests over the phone, and for tools to detect deepfakes.

A Broader Issue: Manipulation of Mass Opinion

Now, we're going to leave the corporate framework for a moment to discuss a subject close to my heart: the manipulation of mass opinion in our current democracies. Specialists are starting to talk about this more, and I believe it's a huge issue. It's about how we organize ourselves as a society, the votes we cast, and the potential manipulation of public opinion.

One alarming case study involves an election in Slovakia. It pitted a pro-NATO candidate against a slightly more pro-Russia candidate. Forty-eight hours before the vote, during the pre-election silence period when candidates are no longer allowed to speak publicly, a deepfake emerged. The fake recording showed the pro-NATO candidate admitting to rigging the election and suggesting that not many people would vote. It took more than 48 hours to debunk it, but the damage was done, and the pro-NATO candidate lost. This incident highlights how destabilization and manipulation operations can significantly impact democratic processes.

This is a real subject that deserves attention, and it could be a conference topic on its own. Increasing our maturity in defending against these threats is crucial, both on a corporate scale and personally. We need to improve how we behave and protect our information.

Are We Really Concerned?

So far, I've talked about a $25 million transfer case, state-sponsored attacks, and research carried out by specialists looking into the future. But what about us, here and now? Are we really concerned? Is it feasible to be attacked? To illustrate this, I've prepared a demonstration of a voice cloning attack.

Voice Cloning Attack Demonstration

The first step in cloning a voice is to get a sample. I've taken a familiar voice, Roger Federer's, from an interview he gave in French on RTS. From that 12-minute interview, I cleaned up the audio and created a 9-minute voice sample. With today's accessible technologies, it takes just a minute to create a quality voice clone.

Here’s a sample of what a cloned voice can do: "Hello, this is Roger. I'm pleased to take part in this conference on cybersecurity."

Imagine receiving a call from your company's president, whose voice is already available online from interviews. The president tells you to expect an email. When the email arrives, a rapport has already been established, making it easier to bypass security reflexes.

For the demonstration, I've connected speech-to-text upstream, an LLM (like ChatGPT) in the middle, and the text-to-speech clone of Roger Federer's voice downstream. This setup lets me speak, have my speech converted to text and processed by the LLM, and then have the answer spoken back using the voice clone.
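
To make the plumbing concrete, here is a minimal sketch of one turn of such a speech-to-text, LLM, text-to-speech loop, assuming the OpenAI Python SDK. The model names, file paths, and prompt are illustrative assumptions; a stock synthetic voice is used rather than a clone, and the voice-cloning step itself is not shown.

```python
# Minimal sketch of one turn of the speech-to-text -> LLM -> text-to-speech
# loop described above, assuming the OpenAI Python SDK. Model names, file
# paths and the prompt are illustrative; a stock voice is used, not a clone.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# 1. Speech-to-text: transcribe the caller's last utterance.
with open("caller_utterance.wav", "rb") as audio_file:
    transcript = client.audio.transcriptions.create(model="whisper-1", file=audio_file)

# 2. LLM: generate the next reply in the conversation.
completion = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": "You are on a phone call. Keep answers short."},
        {"role": "user", "content": transcript.text},
    ],
)
answer = completion.choices[0].message.content

# 3. Text-to-speech: synthesize the reply so it can be played back to the caller.
speech = client.audio.speech.create(model="tts-1", voice="alloy", input=answer)
speech.write_to_file("reply.mp3")
```

The point of the sketch is how little glue code is needed: each stage is a single API call, which is exactly why this kind of setup is now within reach of low-skill attackers.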

The Conversation

Here’s an unscripted conversation between me and the cloned voice of Roger Federer:

  • Roger Clone: Hello, is this Thomas?
  • Thomas: Yes, this is he.
  • Roger Clone: Great, so Thomas, are you a tennis fan?
  • Thomas: Yes, but sorry, who’s on the phone now?
  • Roger Clone: I’m Roger Federer, the tennis star. I’m calling to offer you something really cool. Do you like tennis?
  • Thomas: Yes, but wait, is that really Roger Federer?
  • Roger Clone: Yes, it’s me, Roger Federer. So, does the idea of dining with the biggest stars in tennis appeal to you?

Implications and Security Measures

This example shows how easy it is to use AI for nefarious purposes. Voice cloning technology is relatively plug-and-play and requires minimal technical knowledge.

How to Protect Ourselves

There's no perfect solution, but we must adopt a defense-in-depth strategy:

  • Filtering: Detection technologies for AI-generated content are improving, but we must also consider other communication channels, like the telephone support desk that was used in the MGM Resorts attack.
  • Exposure Control: Monitor leaks on the dark web and track weak signals such as the registration of typo-squatting domains (a minimal sketch follows this list).
  • Specific Procedures: Implement procedures for transfers, changes to payment details, and system updates, including double-checking processes.
  • Employee Training: Continuous training is crucial. It's simpler to get people to adopt new behaviors than to deploy new custom solutions.
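
As an illustration of the exposure-control point above, here is a minimal sketch of naive typo-squatting monitoring: generate single-character variants of your domain and check whether they already resolve. The domain and the variant rule are illustrative assumptions; dedicated tools such as dnstwist cover far more permutation types.

```python
# Minimal sketch of typo-squatting monitoring (see "Exposure Control" above):
# generate naive single-character-deletion variants of a domain and check
# whether they already resolve in DNS. Domain and rules are illustrative.
import socket

def naive_variants(domain: str) -> set[str]:
    """Single-character deletions of the registrable label, e.g. exmple.com."""
    label, _, rest = domain.partition(".")
    return {label[:i] + label[i + 1:] + "." + rest for i in range(len(label))}

def resolves(domain: str) -> bool:
    """Treat a successful DNS lookup as a sign the domain is in use."""
    try:
        socket.gethostbyname(domain)
        return True
    except socket.gaierror:
        return False

for candidate in sorted(naive_variants("example.com")):
    if resolves(candidate):
        print(f"Possible typo-squat already registered: {candidate}")
```

In practice you would run this kind of check on a schedule and feed hits into the same alerting pipeline as your dark web monitoring.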

Conclusion

AI significantly enhances existing attack methods, making it essential to stay vigilant and adopt comprehensive security measures to mitigate these evolving threats. We must improve our defenses both on a corporate and personal level to protect ourselves against these sophisticated attacks.
