
Summary
Annual phishing training built around spotting typos and suspicious sender addresses doesn't address AI-enabled social engineering. In 2025, 83% of phishing emails were AI-generated with flawless grammar and highly tailored context, and AI-crafted spear-phishing achieved a 54% click-through rate versus 12% for human-drafted messages. This article explains what financial services CISOs must add to security awareness programs in 2026. For the full threat landscape, see our 2026 Social Engineering Risk Report or our Financial Services resources.
Why is traditional phishing training falling behind?
Traditional training teaches staff to look for cues AI has now erased: broken grammar, generic greetings, crude formatting, implausible pretexts. As shown in our 2026 Social Engineering Risk Report, generative AI drove a 1,265% increase in cyberattacks in 2025. AI-generated phishing emails exhibit flawless grammar and highly tailored context, and polymorphic attacks ensure no two emails look alike, bypassing signature-based filters.
According to Positive Technologies, social engineering accounted for 55% of cyberattacks targeting financial organizations in 2024 and Q1 2025. Staff trained only on email phishing are unprepared for a deepfake video call from their own CFO or a cloned-voice vishing call from IT support.
What three capabilities make AI social engineering different?
Hyper-personalization at zero marginal cost. AI collapses the cost of spear-phishing, allowing attackers to generate 10,000 unique, hyper-personalized emails in moments. Tools scrape public data (LinkedIn, earnings calls, press releases) to craft contextually accurate messages that achieve a 54% click-through rate versus 12% for human-drafted messages. Cybercrime kits like "InboxPrime" use LLMs to generate polished, tailored phishing emails with real-time spam diagnostics.
Multichannel attack sequencing. AI powers coordinated attacks across email, SMS, voice, and video simultaneously. 50% of financial attacks now hide payloads in PDFs or QR codes. An attacker can send a convincing email, follow up with a cloned-voice call, and join a deepfake video meeting, all reinforcing the same pretext.
Multilingual scale. AI now produces grammatically perfect lures in any language, targeting global branches without manual rewriting. Language barriers and cultural context errors, once reliable red flags, are gone.
What must a modern security awareness program include?
| Program Element | Traditional Training | Modern AI-Aware Training (2026) |
|---|---|---|
| Email phishing simulation | ✅ | ✅ (still essential) |
| SMS phishing (smishing) | Sometimes | ✅ |
| Voice phishing (vishing) | Rarely | ✅ (with voice cloning scenarios) |
| Deepfake video recognition | No | ✅ |
| Multichannel "stitched" attack scenarios | No | ✅ |
| QR code / PDF payload awareness | Rarely | ✅ |
| Frequency | Annual | Continuous / quarterly |
| Personalization | Generic templates | Role-targeted simulations |
| Metrics | Completion rate | Behavior change (click, report, time-to-report) |
How should financial services CISOs structure training in 2026?
Four principles make the difference between compliance training and behavior change.
Train by role, not by headcount. Finance, treasury, executive assistants, and HR face different attack surfaces than developers or sales. Segment your program. Finance teams need vishing, executive impersonation and deepfake CFO scenarios; HR needs fake-candidate and W-2 scams; IT help desks need MFA-reset social engineering simulations.
Simulate what attackers actually use. If attackers use voice cloning, your simulation must use voice cloning. If they use multichannel sequences (email then call), your simulation must too. Arsen's 2026 Report recommends launching campaigns that start with an email and follow up with a vishing call or deepfake voicemail, testing the hybrid "stitched" attacks that real adversaries deploy. Arsen's platform supports multichannel social engineering simulation: phishing, vishing, and combined scenarios.
Measure behavior, not completion. Completion rates tell you nothing about resilience. Track click rate, report rate, time-to-report, and susceptibility trend over time for each role segment. Arsen's Report specifically flags the risk of relying solely on "click/no click" metrics: CISOs should also measure behavioral responses in conversation, such as whether staff verify a caller's identity.
Teach adversarial thinking. Staff who understand how attackers think recognize new variants faster than staff who've memorized a checklist. Include short modules on threat modeling and attacker motivation alongside tactical simulations.
What KPIs should a CISO report to the board?
| KPI | What It Tells You | Target Direction |
|---|---|---|
| Phishing click rate | Email susceptibility | Declining |
| Vishing susceptibility rate | Voice attack resilience | Declining |
| Report rate | Active defense culture | Rising |
| Median time-to-report | Speed of detection | Falling |
| Repeat-clicker rate | Training effectiveness gaps | Falling |
| Cross-channel scenario fail rate | Multichannel readiness | Falling |
What does the regulatory landscape require?
Regulators are moving fast. DORA mandates resilience against all ICT disruptions including AI-powered attacks. The EU AI Act governs internal AI adoption and compliance. Testing frameworks like TIBER-EU are updating scenarios to include AI-amplified threats. Financial institutions must embed AI risk assessments directly into existing compliance programs and update incident response plans for AI-driven attacks. Read our full regulatory risks breakdown.
FAQ
How often should staff face simulations?
Monthly phishing simulation for all staff, quarterly vishing for high-risk roles (finance, treasury, executives and their assistants), and at least one annual multichannel scenario involving deepfake or voice-cloning elements.
Will realistic simulations damage employee morale?
Not if framed correctly. Position simulations as practice. Share aggregate results, avoid naming individuals, and treat clicks as learning moments. The reporting process for suspicious emails and calls should be identical and familiar to all employees.
What is the most common program mistake?
Treating phishing and vishing as separate, unrelated events. Arsen's 2026 Report explicitly recommends unifying your social engineering playbook: stop running siloed campaigns and integrate them to reflect real attacker behavior.
How can we use generative AI in simulations without exposing employee data?
Never feed employee PII (names, emails) directly into public generative AI models. Use placeholders (merge tags) in prompts and populate data locally during post-processing. Arsen's Report includes specific guidance on sanitizing AI inputs for GDPR compliance.
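One way to implement the merge-tag approach is to keep substitution entirely on local infrastructure: the prompt and the model's output contain only placeholders, and employee data is filled in afterwards. A sketch in Python (the `{{tag}}` syntax and field names here are illustrative, not a specific platform's format):

```python
import re

MERGE_TAG = re.compile(r"\{\{(\w+)\}\}")

def personalize_locally(model_output: str, employee: dict[str, str]) -> str:
    """Fill merge tags AFTER the LLM call, on our own infrastructure,
    so names and emails never appear in the prompt or the provider's logs.
    Unknown tags are left intact rather than guessed at."""
    return MERGE_TAG.sub(lambda m: employee.get(m.group(1), m.group(0)), model_output)

# The prompt sent to the public model contains only placeholders:
prompt = "Write a simulated IT-support email addressed to {{first_name}} ({{email}})."
# ...and so does the draft the model returns:
draft = "Hi {{first_name}}, we noticed unusual activity on {{email}}..."
# PII substitution happens only in post-processing, locally:
final = personalize_locally(draft, {"first_name": "Dana", "email": "dana@example.com"})
```

The key property: the only text that crosses the boundary to the external model, in either direction, is placeholder text.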
How do we calculate the program's return on investment?
Track susceptibility reduction per role, incidents avoided (reported before execution), and time-to-report. Compare the total against the cost of a single averted wire fraud. With AI-driven fraud losses projected to reach $40 billion by 2027 (Deloitte), the ratio typically favors the program by one to two orders of magnitude.
Is training our own staff enough?
No. 97% of U.S. banks and 100% of EU financial institutions experienced indirect data exposure in 2024 following third-party compromise. Audit vendor security practices and map supply chain dependencies as well. Read our supply chain risk article.