Using third party LLMs for genAI phishing operations

Thomas Le Coz

Thomas Le Coz

AI Attacks

With the rise of genAI phishing, you might be tempted to use it for your phishing operations.

Using a third party, well trained LLM has obvious advantages, from infrastructure cost — this GPU time isn’t cheap — to ease of interaction with over the counter APIs, but it also introduces a new piece of tech in your stack, and a few safety and regulatory considerations.

In this article, we’ll go over two key points to keep in mind if you’re using a third party LLM for your phishing simulations.

Not Sharing Personally Identifiable Information

Third party LLMs are constantly evolving. Part of the evolution is due to the fact that they use user interaction and submitted data as training data.

This means that the information you're sending the LLM can — and will be — used to train the model and improve its result.

So, as a rule of thumb, you should avoid sharing anything sensitive with a model you don’t own and control.

Now, on top of this, there is a very particular type of data you absolutely don’t want to share with an external LLM: Personally Identifiable Information, or PII.

Depending on the country you’re operating from and clients you’re working with, PII will be subjected to privacy laws.

Sharing PII with a third party will be problematic, especially if you’re unsure about how this data will be used, how long for and to what end.

Given the young age of most generative AI companies, they will be considered a security and privacy risk by many, so you can’t directly use third party LLM to generate and send phishing simulations.

One easy trick to avoid sharing PIIs with external LLMs is to use merge tags.

Rather than feeding sensitive information to the LLM and obtaining a ready-to-use email, you should craft your prompt to return an email template with merge tags.

Then, add a post-processing function that will render the final email that your phishing kit will be able to send.

You get the best of both worlds:

  • Unique, automatic phishing email generation thanks to genAI
  • Full control over PII transfer thanks to your phishing infrastructure and software

post-processing genAI phishing

Here’s a simple rule you can append to your prompt to get it to return merge tags (feel free to adapt):

PII through LLM Phishing

Jailbreaks

Most third party LLMs have protections in place to avoid being used for nefarious purposes.

Offensive security, by definition, looks a lot like a legitimate threat in many regards and this can be a problem if you want to craft phishing emails using a third party LLM like OpenAI.

However, with these protections came the use of “jailbreaks”, or ways to craft your prompt that would allow you to generate content that shouldn’t be allowed otherwise.

We’ve talked about it before, but the most universal jailbreak you can use when it comes to social engineering attack generation is what we call “feeding the pretext”.

Because of the very nature of any social engineering attempt, you’ll have a pretext in mind when you craft your email or attack scenario.

Because a legitimate email from the IT department, asking someone to log in to a portal to download a security patch will be very similar to a phishing attempt, you can simply feed this pretext to your LLM of choice to get a very usable output.

Example Phishing Prompt PathIT

Conclusion

This short article should have provided you with the two main points you need to take into account before using a third party LLM, especially if it’s cloud based and you don’t control where the data goes.

There are other issues to take into account while using generative AI for offensive security operations, especially regarding social engineering — which involve human interactions.

For instance, you’ll want to make sure the LLM doesn’t hallucinate before pushing your content in production. You’ll also want to have some way of measuring content quality, making sure that it has the element you need to compromise the target.

Your credential harvesting attempt email will be less effective if the LLM doesn’t return a link to your attack landing page for instance, and this happened to us.

But these are generic problems, beyond the scope of this article, we talk a little bit about it in our article on generative AI phishing engagements.

If you want to use a ready-made, GDPR compliant phishing kit powered by genAI, please reach out and ask for a demo here.

Don't miss an article

No spam, ever. We'll never share your email address and you can opt out at any time.