AI can be a security accelerator, but it is also a fraud accelerator. For CISOs, the goal isn’t to “detect every deepfake.” The goal is to make sure synthetic audio/video can’t trigger irreversible actions like moving money, resetting identities, or granting privileged access.
In February 2024, the FCC issued a declaratory ruling confirming that AI-generated voices count as “artificial or prerecorded” under the Telephone Consumer Protection Act (TCPA). Robocalls that use AI-generated voices without the recipient’s prior express consent are therefore illegal, and the ruling signals a broader trend: regulators are applying existing rules to AI-enabled abuse. At the same time, U.S. agencies have warned that synthetic media and generative AI are making scams cheaper, faster, and more convincing.
What’s different in 2026: deepfakes are “good enough” for real operations
Attackers don’t need perfect Hollywood deepfakes. They just need “credible enough” to get a human to skip a step.
In real-world fraud patterns, synthetic media shows up most often as:
- Executive impersonation: a “CEO” or “CFO” requesting an urgent payment or a vendor banking change.
- Help desk manipulation: a “traveling exec” who can’t access MFA and “needs a reset right now.”
- Vendor/partner fraud: a “supplier” calling to update ACH details or re-route an invoice.
- Brand/investor attacks: fabricated audio/video “announcements” that spread fast enough to create confusion and reputational damage, even if later debunked.
The uncomfortable truth: as audio and video quality improves, human intuition becomes less reliable. Controls must assume impersonation is possible.
Three high-risk workflows to lock down first
Start where one successful impersonation creates outsized harm:
1) Payments and finance changes — wire transfers, ACH updates, vendor bank changes, payroll reroutes, invoice “urgency” overrides.
2) Identity and account recovery — password resets, MFA resets, device enrollment, SIM swaps, recovery email changes.
3) Privileged approvals — admin access grants, production changes, security control disablement, high-value data exports.
If you only tighten three areas this quarter, tighten these.
Controls that still work when the audio/video is convincing
Assume an attacker can sound like your CFO. Build controls that don’t depend on a person “feeling” something is off.
1) Out-of-band verification (OOB) for high-risk requests
Verify using a known, pre-trusted channel, not the channel the request came in on.
Example: If the “CFO” calls about a wire, finance calls back using the number in the corporate directory or finance system, never the number in the voicemail or email signature.
Tip: Pre-define what counts as a “high-risk request” so staff don’t have to guess in the moment.
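A minimal sketch of that callback rule, assuming a simple directory lookup; the request fields, action names, and directory shape are illustrative, not a real system’s schema:

```python
# Sketch: resolve a callback number from the corporate directory,
# never from the inbound request. All names here are illustrative.
from dataclasses import dataclass


@dataclass(frozen=True)
class HighRiskRequest:
    requester_name: str
    claimed_callback: str  # number supplied by the caller; attacker-controlled
    action: str            # e.g. "wire_transfer", "vendor_bank_change"


HIGH_RISK_ACTIONS = {"wire_transfer", "vendor_bank_change", "mfa_reset"}


def verified_callback_number(directory: dict[str, str],
                             request: HighRiskRequest) -> str:
    """Return the pre-trusted number of record, or raise so staff escalate."""
    if request.action not in HIGH_RISK_ACTIONS:
        raise ValueError("use this path only for pre-defined high-risk requests")
    number = directory.get(request.requester_name)
    if number is None:
        raise LookupError(f"no directory entry for {request.requester_name}; escalate")
    # request.claimed_callback is deliberately ignored: the requester controls it.
    return number
```

The key design choice: the caller-supplied number is carried along but never used. The only trusted source is the number of record.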
2) Two-person integrity for money movement and bank changes
One person initiates; a second person independently verifies the request and the destination details.
“Independent” matters: the verifier should validate using a separate channel/system, not simply ask the same caller to repeat details.
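Independence is easy to state and easy to skip under pressure, so it helps to enforce it in the workflow tool. A minimal sketch of that gate, with illustrative channel labels:

```python
# Sketch: two-person integrity gate for money movement.
# Channel labels are illustrative placeholders.
from dataclasses import dataclass


@dataclass(frozen=True)
class Approval:
    person: str
    channel: str  # e.g. "erp_workflow", "callback_to_number_of_record"


def dual_control_ok(initiation: Approval, verification: Approval) -> bool:
    """Execute only when two different people used two different channels."""
    if initiation.person == verification.person:
        return False  # one person must not both initiate and verify
    if initiation.channel == verification.channel:
        return False  # the verifier must not simply echo the requesting channel
    return True
```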
3) Verified channels only (force requests into authenticated workflows)
Require sensitive requests to be submitted through authenticated systems—finance workflow tools, ERP, vendor portals, or ticketing systems with SSO/MFA.
Voice or email alone should not be sufficient for vendor banking updates, MFA resets, or privileged grants.
Implementation tip: Add a simple policy banner: “We cannot process this request over phone/email. Please submit via [System].” Give staff language they can copy/paste.
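If your intake systems can enforce this, the banner becomes a fallback rather than the control itself. A minimal sketch of a channel gate, with hypothetical channel and action names:

```python
# Sketch: accept sensitive requests only from authenticated workflow systems.
# Channel and action names are hypothetical.
AUTHENTICATED_CHANNELS = {"erp", "vendor_portal", "ticketing_sso"}
SENSITIVE_ACTIONS = {"vendor_bank_update", "mfa_reset", "privileged_grant"}

BANNER = "We cannot process this request over phone/email. Please submit via [System]."


def accept_request(action: str, channel: str) -> tuple[bool, str]:
    """Reject sensitive requests arriving outside authenticated channels."""
    if action in SENSITIVE_ACTIONS and channel not in AUTHENTICATED_CHANNELS:
        return False, BANNER  # staff reply with the banner and stop
    return True, ""
```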
4) Delayed execution for sensitive changes (a deliberate speed bump)
A short cooling-off period (e.g., 30–120 minutes, or “end of day”) reduces the success rate of urgency-based attacks.
Use this for vendor banking changes, payroll changes, and new payee setups.
Exception handling: If your business needs emergency overrides, require executive approval through a verified channel plus a second approver.
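One way to make the speed bump real is to compute the earliest allowed execution time in the workflow system instead of trusting people to wait. A minimal in-memory sketch; the hold durations and the two-approver override rule are policy placeholders to tune:

```python
# Sketch: cooling-off hold for sensitive changes, with an emergency override
# that requires two distinct approvers. Durations are placeholders.
from datetime import datetime, timedelta, timezone

HOLD = {
    "vendor_bank_change": timedelta(minutes=120),
    "payroll_change": timedelta(minutes=120),
    "new_payee": timedelta(minutes=30),
}


def may_execute(change_type: str, submitted_at: datetime,
                emergency_approvers: tuple[str, ...] = ()) -> bool:
    """Allow execution after the hold expires, or with two distinct approvers."""
    if len(set(emergency_approvers)) >= 2:  # verified exec approval + second approver
        return True
    return datetime.now(timezone.utc) >= submitted_at + HOLD[change_type]
```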
5) “Safe phrase” or challenge-response for executives (simple, but effective)
A low-tech option: each exec has a rotating “verification phrase” stored in a secure place (e.g., password manager note shared with a small trusted group).
Use only for true emergencies—and only as an additional layer (not the sole control).
Tip: Don’t make it complicated. It should be easy to use under stress.
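If you automate phrase rotation, keep it as simple as the control itself. A minimal sketch using Python’s `secrets` module; the wordlist and phrase length are placeholders, and the output still belongs in the shared password-manager note:

```python
# Sketch: generate a rotating verification phrase to store in the shared
# password-manager note. Wordlist and length are placeholders.
import secrets

WORDS = ["copper", "lantern", "orchid", "granite", "meadow", "falcon", "harbor", "velvet"]


def new_verification_phrase(n_words: int = 3) -> str:
    """Pick words with a cryptographic RNG, e.g. 'falcon-copper-meadow'."""
    return "-".join(secrets.choice(WORDS) for _ in range(n_words))
```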
6) Help desk hardening (where deepfake + urgency wins)
Help desks are prime targets because they can reset identities quickly. Guardrails to add:
- Require phishing-resistant MFA for help desk admin actions where possible.
- Make MFA reset a tiered workflow (a routing sketch follows this list):
  - Tier 1: user self-service with strong verification
  - Tier 2: help desk with identity proofing + manager confirmation
  - Tier 3: security approval for privileged accounts
- Put high-risk actions behind “verified channel + ticket + second approver.”
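A minimal sketch of the tier routing above; the two account attributes are assumptions about what your identity system exposes:

```python
# Sketch: route MFA-reset requests to the tiers described above.
# The two boolean attributes are assumptions about your identity system.
def mfa_reset_tier(is_privileged: bool, self_service_verified: bool) -> str:
    if is_privileged:
        return "tier3_security_approval"   # security signs off on privileged accounts
    if self_service_verified:
        return "tier1_self_service"        # strong verification already completed
    return "tier2_identity_proofing"       # help desk proofing + manager confirmation
```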
Practical scenarios to train against (so training isn’t theater)
Training works when it maps to real workflows and gives staff exact behaviors—not vague warnings.
Scenario A: “CEO needs an urgent wire”
Attack: voice clone + urgency + “I’m in a meeting; don’t call me back.”
Desired behavior: finance refuses to execute until OOB verification + second approver + verified workflow request are complete.
Scenario B: “IT, my phone broke—reset my MFA”
Attack: synthetic voice posing as a senior leader, leveraging travel/urgency.
Desired behavior: help desk follows identity proofing script and escalates for privileged accounts; no same-channel verification.
Scenario C: “Vendor changed banks—update our ACH today”
Attack: email + call combo; attackers provide “supporting documentation.”
Desired behavior: verification through the vendor portal / known contact and delayed execution; confirm changes through pre-existing relationship channels.
Monitoring that’s actually useful
Rather than trying to detect deepfakes, focus on detecting the abuse path.
Metrics and alerts that help:
- Verification compliance rate for bank changes, MFA resets, privileged grants.
- Number of high-risk requests blocked due to policy (expect this to rise at first, then stabilize).
- Time-to-detect suspicious identity events, like repeated MFA reset attempts or sudden new device enrollments for privileged users.
- Anomaly monitoring for vendor banking changes (new account + high payout + new payee in short window).
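That last rule is concrete enough to encode directly. A minimal sketch of the detection, with threshold and window values that are assumptions to tune against your own payment data:

```python
# Sketch: flag a payout when the payee, the bank account, and a high amount
# all appear inside a short window. Threshold and window are assumptions.
from dataclasses import dataclass
from datetime import datetime, timedelta

WINDOW = timedelta(days=7)
HIGH_PAYOUT = 50_000.00  # illustrative threshold


@dataclass(frozen=True)
class VendorPayout:
    vendor_id: str
    payee_created_at: datetime
    account_changed_at: datetime
    amount: float
    paid_at: datetime


def is_suspicious(p: VendorPayout) -> bool:
    """New payee + new account + high payout, all within the window."""
    new_payee = p.paid_at - p.payee_created_at <= WINDOW
    new_account = p.paid_at - p.account_changed_at <= WINDOW
    return new_payee and new_account and p.amount >= HIGH_PAYOUT
```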
If you’re in financial services: don’t ignore the FinCEN signal
FinCEN issued an alert describing increased use of deepfake media in fraud schemes reported by financial institutions. Even if you’re not a bank, the pattern matters: synthetic media is showing up in real investigations and real loss events—not just conference demos.
Quick checklist for CISOs
- Document the required verification steps for payments, identity resets, and privileged approvals, and enforce them in tools (not just policy).
- Require phishing-resistant MFA for admins and help desk operators where feasible.
- Run a deepfake-focused tabletop exercise this quarter using the three scenarios above, and track gaps as action items.
- Align Legal + Comms on a response plan for executive or brand-targeting deepfakes (what you’ll confirm, where you’ll post it, and how quickly).
References:
- FCC — FCC 24-17 Declaratory Ruling: AI-generated voices are covered under the TCPA (robocalls) (Feb 2, 2024) — https://docs.fcc.gov/public/attachments/FCC-24-17A1.pdf
- FBI IC3 — Public Service Announcement: Criminals Use Generative AI to Facilitate Financial Fraud (PSA241203) (Dec 3, 2024) — https://www.ic3.gov/PSA/2024/PSA241203
- NSA / FBI / CISA — Cybersecurity Information Sheet: Contextualizing Deepfake Threats to Organizations (Sep 12, 2023) — https://media.defense.gov/2023/Sep/12/2003298925/-1/-1/0/CSI-DEEPFAKE-THREATS.PDF
- FinCEN (U.S. Treasury) — Alert FIN-2024-Alert004: Deepfake media in fraud schemes (Nov 13, 2024) — https://www.fincen.gov/system/files/shared/FinCEN-Alert-DeepFakes-Alert508FINAL.pdf
- FTC — Consumer Alert: Fighting back against harmful voice cloning (Apr 2024) — https://consumer.ftc.gov/consumer-alerts/2024/04/fighting-back-against-harmful-voice-cloning
