Phishing & AI: new attacks and new solutions

Thomas Le Coz

Cybersecurity

Benjamin Leroux is the CMO of [Advens](https://www.advens.fr/en/) and a cybersecurity expert.

In this interview, we covered many points and anecdotes about the impact of generative AI on the cybersecurity landscape.

I hope you enjoy the conversation ;)

Video discussion: Phishing & AI

Key takeaways

Here's a quick summary of what we talked about during this conversation.

GenAI raises a lot of questions, and Benjamin draws a parallel with the cloud offerings that shook up our ecosystem.

It will follow a classic adoption curve and end up everywhere, so the question is not whether it's useful but how it will be used.

New attack patterns

GenAI brings new attack patterns to the table. It also sharpens existing ones, making them harder to detect: it generates unique emails and polymorphic payloads, and helps attackers refine their tactics.

We can no longer rely on spotting typos or suspicious domain names when LLMs can produce packaged attack scenarios that won't trigger alerts in their initial stages.

Regarding phishing, generative AI facilitates pretext development, from website cloning to email copywriting and payload obfuscation.

Commercial, ready-to-use LLMs are not only effective for these attacks; their safety measures are also very easy to bypass.

Current attacks that already work well (CEO fraud, credential harvesting, etc.) can be improved by generative AI, in stealth and complexity but also in volume.

Higher attack volume

Given these improvements, the marginal cost of creating an attack is getting very low, which in turn will increase the volume and scale of such attacks.

This will create more noise and alerts to manage on the defensive side, increasing congestion and the risk of high-criticality attacks slipping through.

Synthetic media

Deepfakes, voice cloning and synthetic media production in general are a big problem that is already creating new challenges.

First, it makes many of our current best practices ineffective. Double verification, for instance relying on someone's voice to confirm an emailed instruction, can fail if that voice has been cloned.

Second, it can be used to manipulate democratic elections or steer public opinion by creating fake content, released at carefully chosen moments before a vote.

Defensive usage

GenAI, and machine learning more broadly, also gives blue teams a better level of protection.

For instance, it can help improve rule-based detection to cover a larger part of the kill chain and to spot patterns and indicators of attack attempts.

It also increases the capacity to sort and prioritize alerts by providing context and scoring.
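To make the idea of context-driven alert scoring concrete, here is a minimal, hypothetical sketch. The fields (`severity`, `asset_criticality`, `indicators`) and the weighting are illustrative assumptions, not Advens's actual scoring model:

```python
from dataclasses import dataclass, field

@dataclass
class Alert:
    source: str
    severity: int           # 1 (low) to 5 (critical), as reported by the sensor
    asset_criticality: int  # 1 to 5, taken from an (assumed) asset inventory
    indicators: list = field(default_factory=list)  # matched indicators of attack

def score(alert: Alert) -> float:
    """Toy priority score: weight sensor severity by asset value,
    then boost alerts that match several indicators of attack."""
    base = alert.severity * alert.asset_criticality
    boost = 1 + 0.2 * len(alert.indicators)
    return base * boost

def triage(alerts: list) -> list:
    """Return alerts sorted most-urgent-first."""
    return sorted(alerts, key=score, reverse=True)

alerts = [
    Alert("mail-gateway", severity=2, asset_criticality=5, indicators=["lookalike-domain"]),
    Alert("endpoint", severity=4, asset_criticality=4, indicators=["macro-exec", "c2-beacon"]),
    Alert("proxy", severity=3, asset_criticality=1, indicators=[]),
]

for a in triage(alerts):
    print(f"{a.source:14} score={score(a):.1f}")
```

Even this crude scheme surfaces the endpoint alert first: a moderate signal on a critical asset outranks a loud signal on a throwaway one, which is exactly the kind of contextual ranking that helps cut through the extra noise.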

Agility is a key success factor

Given how quickly LLMs and attack techniques evolve, agility and the speed of implementation will be key factors in keeping up with new attack techniques.

It's a cat-and-mouse game: the faster we deploy solutions to detect and fend off this new volume of advanced attacks, the more likely we are to stay protected.

New risks mean new opportunities

We couldn't conclude the talk on a negative note.

These new risks and this changing landscape bring a lot of new opportunities to fight new attack methodologies.

Some are classic improvements and iterations; others are potential disruptions in the way we treat attacks.

Moreover, we need to make people more aware of the capabilities AI has and the impact it can have on our daily lives and decisions.

Show notes

Here are the references, with timestamps, for the various resources we talked about during the discussion.
