Spreading malware or gaining initial access through email attachments is the third most common phishing tactic.
Because of the scale of our phishing operations at Arsen, we wanted to explore the use of generative AI to craft malicious attachments.
This has many interesting use cases, from conducting black box operations to simulating real-world threats, all to audit and train our clients’ users against highly realistic attacks.
I thought I’d share our findings in this article.
Malicious payload generation
It didn’t take long for genAI to be used to generate malware.
LLMs, in large part due to their training datasets, are really good at spitting out code.
We’ve even seen case studies on how ChatGPT was used to generate stealthy malware.
One very interesting thing about generative AI is its ability to write the same code in many different ways, creating varied payloads with no obvious shared pattern and a different code signature each time.
So, using a collection of jailbreak prompts, attackers can generate dropper variations that would be very hard to detect.
We don’t go this far at Arsen: we stop at catching intent, the behavioral data point that lets us flag high-risk employee behavior.
This is why we don’t have field experience to share on active malware generation, but if you want to collaborate with us on a paper on the subject, we’ll be happy to give you access to our phishing infrastructure.
Social engineering pretext generation
One phishing technique we’ve seen in the wild relies on sending an innocuous attachment that leverages social engineering to make people click on a link and download malware.
We’re back to content generation, which genAI also does well.
Keeping with this idea of a different payload and pretext for each target, which avoids both pattern detection and repetition across training, generative AI can be used to create attachment variations that would lead to compromise in real-world attacks.
Here’s what we like to do.
First, we want to generate a text document commonly transferred by email in a corporate environment (PDF, Word, …).
This shouldn’t trigger red flags: it’s common practice to exchange these documents by email, and as long as they carry no active payload, they will get past most filters.
Second, we want to make it really hard for the user not to click. We’ll use curiosity, urgency, authority and every other common manipulation technique we can deploy.
Third, we like to pretend that something is wrong with the file and that the user should click the link inside the document, either to display it correctly or to patch their document reader.
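To make this concrete, here’s a minimal sketch of those three steps in Python. It’s illustrative rather than our production code: the openai and python-docx packages, the model name, the memo prompt and the lure URL are all stand-ins.

```python
# pip install openai python-docx
from openai import OpenAI
from docx import Document

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Step two: ask the model for a pretext that leans on urgency and authority.
# The prompt and model name are placeholders, not what we run in production.
pretext = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{
        "role": "user",
        "content": (
            "Write a short, formal internal memo from the finance department "
            "announcing an urgent change to the expense policy."
        ),
    }],
).choices[0].message.content

# Step one: wrap it in a Word document, a format routinely exchanged by email.
doc = Document()
doc.add_heading("Expense Policy Update", level=1)
doc.add_paragraph(pretext)

# Step three: the lure. Pretend the file didn't render and offer a "fix".
# In a simulation, the URL points at a tracking page, not real malware.
doc.add_paragraph(
    "This document could not be displayed correctly. "
    "Open the online version here: https://example.com/view?id=1234"
)
doc.save("expense_policy_update.docx")  # no active payload, so most filters let it through
```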
Thanks to generative AI, we can generate a lot of variations on the fly, reducing patterns and with them the chances of detection (see the sketch after this list):
- Title of the document
- Content of the document, which will in turn influence its size, appearance and, of course, checksum
- Subject of the email
- Content of the email
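Here’s a sketch of that variation loop under the same assumptions as above; the model name and JSON field names are illustrative, and JSON mode simply keeps the output parseable.

```python
import json
from openai import OpenAI

client = OpenAI()

def generate_variant(target: str) -> dict:
    """Produce a fresh document title, document body, email subject and
    email body for one target, so no two sends share wording or layout."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        response_format={"type": "json_object"},
        messages=[{
            "role": "user",
            "content": (
                "Return a JSON object with keys doc_title, doc_body, "
                "email_subject and email_body for a routine corporate memo "
                f"addressed to {target}. Vary tone, wording and length."
            ),
        }],
    )
    return json.loads(resp.choices[0].message.content)

# One unique variant per target.
variants = {t: generate_variant(t) for t in ["alice", "bob", "carol"]}
```

Because each variant differs in wording and length, every generated attachment ends up with a different size and checksum, which is exactly what defeats signature-based detection and keeps repeated training exercises fresh.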
Add some randomization on the sending domain and infrastructure, and you’ll have something VERY stealthy on your hands. If your phishing kit doesn’t allow for it, drop us an email and we’ll show you what we can do ;)
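For completeness, the sending-side rotation can be sketched in a few lines as well. The domain pool and SMTP relay below are hypothetical, and a real kit would also handle authentication, SPF/DKIM alignment and throttling:

```python
import random
import smtplib
from email.message import EmailMessage

# Hypothetical lookalike domains registered for the simulation campaign.
SENDING_DOMAINS = ["hr-notify.example", "docs-share.example", "it-alerts.example"]

def send_variant(to_addr: str, variant: dict, docx_path: str) -> None:
    domain = random.choice(SENDING_DOMAINS)  # different sender per message
    msg = EmailMessage()
    msg["From"] = f"no-reply@{domain}"
    msg["To"] = to_addr
    msg["Subject"] = variant["email_subject"]
    msg.set_content(variant["email_body"])
    with open(docx_path, "rb") as f:
        msg.add_attachment(
            f.read(),
            maintype="application",
            subtype="vnd.openxmlformats-officedocument.wordprocessingml.document",
            filename=f"{variant['doc_title']}.docx",
        )
    with smtplib.SMTP(f"smtp.{domain}", 587) as server:  # placeholder relay
        server.starttls()
        server.send_message(msg)
```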
Conclusion
I hope this article shows some of the ways AI can be used to generate malicious payloads for your phishing engagements.
The high level of variability and uniqueness that AI can produce increases attack stealth and allows for larger-scale attacks, on top of the more targeted use cases above.
If you want to experiment with phishing simulation sending attachments generated by genAI, you can request a demo here.