Effective generative AI for phishing engagements

Thomas Le Coz

Thomas Le Coz

AI Attacks

If you’ve been reading our content, you’re probably itching to shoot some genAI phishing for your next simulation campaign.

But one does not simply ask ChatGPT for a cool phishing email.

Here are our best practices to deploy generative AI in your phishing operations.

Prompt engineering applied to phishing

You’re not without knowing that ChatGPT and OpenAI’s API have a few built-in security measures that should (keyword) prevent them from spitting out attack-ready social engineering content.

This is why you need to “jailbreak” OpenAI or any popular and production ready LLM you can access through API.

Fortunately for us, phishing and marketing are very very closely related.

If you think about it, marketing uses the same manipulation tactics as phishing.

The principles of influence, often quoted by marketing gurus, are the exact same that can be used in social engineering engagement.

The difference between marketing and malicious content is just the intent and the outcome.

Good marketing should help prospective customers buy the best solution for their need, while phishing and social engineering aim to negatively impact its targets.

But your LLM of choice doesn’t need to know this.

So, the universal jailbreak to craft relevant and effective phishing emails is to feed the pretext to the LLM.

Here is an exaggerated example: failed prompt phishing genAI

Obviously, this won’t work.

But let’s feed the pretext to ChatGPT: jailbreak phishing prompt

… and this works.

Keep this in mind to speed up your prompt creation.

Leveraging target data

Now that we have a way to reliably circumvent protections, we can exploit generative AI a bit more and improve the quality of our phishing email.

I mean, if we’re going to do generative AI phishing, let’s go all the way, right?

Handling Personally Identifiable Information

The first thing most simulations already use to personalize their phishing simulations is the use of PII such as the first and last name of the target, and often its email address.

The problem with this is that it would be a bad practice to feed those in a third-party LLM.

Even though progress is made in how the data is handled, most of the data sent to these LLMs will be used as training data.

From a compliance perspective, if you’re working in Europe, you’ll have to add a fairly new and unknown provider as a subprocessor and this might not pass the sniff test from the legal department.

So all in all, we want to avoid passing on PII to your genAI model.

Fortunately, we can ask the model to return merge tags and replace them afterwards.

This way you can pass on some context information while avoiding sending anything regulated to the LLM, then post-process the output to replace the merge tags with PIIs.

Prompt with merge tags

Building context and relevance

It’s hard to share a precise prompt on this one as it will depend on the amount of data you have available in your engagement.

However, because generative AI is quite good at interpreting unstructured data, you can feed it a lot of context elements and use it in your pretext generation.

My two favorites are company information and job title. These are easy to scrape automatically from LinkedIn and can really improve contextualization and relevance.

For instance, salespersons will be more reactive to mails from the CRM or prospective customers.

If you have some context of the industry of the company, it’s even easier to craft a very relevant email for each target.

So keep this in mind while building your prompt and add some conditional clauses depending on the level of OSINT automation you have in your reckon phase.

Handling language

If you’re handling simulation for a large company, you might have targets in several countries, speaking different languages.

This one will seem obvious for most of you but the way we approached language selection completely changed with generative AI.

In traditional simulations, we used to have a bunch of phishing scenarios, tagged by language, and we’d match the target’s language with the corresponding scenario.

Using generative AI, you have to be more lazy: just add a language instruction at the end of your prompt, depending on the language of the target.

Easier, and you get to reuse your prompt rather than writing several copies of your scenario. This is DRY (Don’t Repeat Yourself) applied to genAI prompts.

Language changing prompt

Quality control and looping

Generative AI models are known to hallucinate.

This means at some point, they will spit out something very unexpected and irrelevant.

If you’re doing small scale or very targeted simulations, you can manually validate what your LLM is returning and still gain a massive amount of time.

However, if you’re doing large scale simulation campaigns, and don’t want to review 3.000 emails, you need to add a QA layer.

The role of that piece of code will be to review a few key points and make sure the email generated is production ready and can be sent to the target.

We like to have at lease a very simple version of this QA layer check for:

  • The presence of a link to the credential stealing lander when doing credential harvesting
  • The correct HTML formatting of the email
  • The presence of the requested merge tags

A more advanced one could do some rendering, syntax checks or even use the LLM to review and rate the email submitted.

You’ll have to find your level of acceptable risk. If the proposed template doesn’t pass the QA layer, loop around and generate a new one. Repeat until it passes. It’s way cheaper than your hourly rate.

QA Layer for genAI

Conclusion

Hopefully, you can now deploy safe and effective phishing campaigns that are far beyond the “good enough” quality the industry was stuck in a few months ago.

Generative AI can help security professionals create much more precise simulations to audit and train companies against next generation threats.

If you want to deploy generative AI phishing in just a few clicks, you can request a demo here, Arsen integrates all the best practices described here, and more.

Don't miss an article

No spam, ever. We'll never share your email address and you can opt out at any time.