The lure with no link
Email security gateways spent the last fifteen years getting better at four things: scanning URLs for malicious destinations, sandboxing attachments to detect malware, validating sender domains via SPF/DKIM/DMARC and pattern-matching content for known phishing signatures. Callback phishing defeats all four by removing the things they scan for.
A callback phishing email contains: plausible-looking business communication (a renewal notice, an invoice, a refund offer), a sense of urgency (your subscription auto-renewed, you have a charge to dispute) and a phone number to call. No malicious URL. No attachment. No spoofed sender domain. Nothing for the gateway to catch.
The malicious payload is what happens after the victim calls. That payload lives on a phone line, where no email security tool can inspect it. The technique is documented in the research literature as callback phishing, TOAD (telephone-oriented attack delivery) and hybrid phishing - all describing the same email-to-voice handoff.
This pattern has grown sharply in the threat landscape because email-side technical defenses have become effective at the things they scan for, leaving voice as the remaining gap. Our phishing trends roundup for 2026 covered this as one of the year's defining patterns; this post is the deeper operational treatment.
Anatomy of a callback phishing campaign
Stage 1: The lure email
The lure is intentionally low-tech in appearance. Common pretext patterns:
- Fake subscription renewal: "Your antivirus subscription auto-renewed for $499. To cancel, call 1-800-XXX-XXXX within 24 hours." The brand is usually a consumer-tech-support category (antivirus, identity-protection, geek-help) where consumers do have legitimate subscriptions and where a surprise renewal is plausible.
- Fraudulent invoice: "Invoice #38427 for $1,250 has been processed against your account. If you did not authorize this charge, call 1-800-XXX-XXXX to dispute." The "invoice" might be attached as a benign PDF (which sandbox-scanning won't flag), or referenced by number only.
- Fake refund offer: "You are owed a refund of $379. Call to claim within 7 days or it will expire." This pattern targets human greed and time pressure; the call-center then walks the victim into a banking-screen-share scenario.
- Internal impersonation: "HR needs to discuss your benefits enrollment. Please call extension XXXX or 1-800-XXX-XXXX." This pattern leverages organizational trust and is harder to spot because the pretext mimics legitimate internal processes.
The email is plain prose with the phone number prominent. Sometimes a benign attachment (a PDF "invoice" with no malware) provides additional legitimacy; the attachment is sandbox-clean by design because it isn't the payload.
Stage 2: The phone call
The victim calls. An attacker-staffed line picks up - the operation often runs out of an actual call center, sometimes offshore, with scripts and quality-control. The agent has one of several playbooks ready:
- Remote-access install: "To cancel the charge I need to refund it through our system; please install AnyDesk so I can process it." Once installed, the attacker has screen and keyboard control. Persistent backdoors get installed; credential stores get harvested; data gets exfiltrated. The "refund" is the social-engineering wrapper for the actual goal: machine compromise.
- Banking screen-share fraud: "Log in to your bank, I need to verify the refund routes correctly." The agent guides the victim through transferring funds out, framing it as receiving a refund. The victim approves transactions they do not understand.
- Credential and MFA-code harvest: "Read me the verification code that just texted you so I can confirm your identity." The victim reads a real MFA code that is granting the attacker login to one of their accounts in real time. This pattern intersects with AiTM phishing - the email lure delivered the victim into the call, the call harvests the live MFA token.
- Information harvest for follow-on attacks: "I need to verify your account; what's your employer name, work email, manager's name?" The collected information feeds future spear-phishing or BEC campaigns against the victim's organization.
Stage 3: The follow-on
Whatever was harvested or installed in Stage 2 becomes the staging point for the actual financial fraud or breach. Money moves out of accounts; ransomware deploys to networks the victim's compromised device can reach; credentials drive lateral movement.
From a victim's perspective the bad event happens days later when the bank calls about the unauthorized transfer or IT calls about the ransomware on the network. The phone call itself, in the moment, felt routine - they were trying to "resolve a charge" or "claim a refund."
Why traditional defenses fail
Standard email-security infrastructure misses callback phishing because:
- URL scanning has no URL to scan. Lures include only a phone number. URL-pattern detection, sandbox detonation and click-time scanning all need a URL to operate on.
- Attachment sandboxing has no malware to find. If an attachment exists, it's a benign PDF or text file. Sandbox detonation finds nothing because the payload isn't the file.
- Domain authentication doesn't catch it. Attackers register a sending domain that isn't impersonating a specific brand, or use a hosted email service that has its own valid SPF/DKIM/DMARC. The email passes authentication because the sender genuinely owns the domain - it just happens to be a malicious actor.
- Content pattern matching is fooled. The pretext language closely mimics legitimate vendor communication (because the vendor templates are public, easy to clone). Generic "phishing language" detectors don't fire on well-written invoice or renewal copy.
- The actual social-engineering happens off-channel. Even if the email gateway flagged the email as suspicious, the email is just an invitation. The attack lives on a phone call no email tool can inspect.
Some vendors have started offering content classifiers tuned specifically for callback patterns (looking for "phone-number-but-no-other-content" signals, urgency-language scoring, etc.). These help at the margin but don't close the gap, because the lures look enough like legitimate vendor communication that any classifier strict enough to catch attacks also generates intolerable false positives on legitimate invoices.
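To make the classifier idea concrete, here is a minimal sketch of the kind of heuristic scoring described above: phone number present, no URL, urgency language. Everything in it - the regexes, the keyword list, the weights - is illustrative, not a production detector, and it exhibits exactly the trade-off noted above: tighten it and legitimate invoices start scoring high.

```python
import re

# Illustrative heuristic scorer for callback-lure signals.
# Regexes, keyword list and weights are assumptions for the sketch.
PHONE_RE = re.compile(r"\b1?[-.\s]?\(?\d{3}\)?[-.\s]?\d{3}[-.\s]?\d{4}\b")
URL_RE = re.compile(r"https?://|www\.", re.IGNORECASE)
URGENCY_TERMS = ("within 24 hours", "auto-renewed", "dispute",
                 "expire", "unauthorized")

def callback_lure_score(body: str) -> float:
    """Return a 0..1 suspicion score for a message body."""
    score = 0.0
    if PHONE_RE.search(body):
        score += 0.5          # a phone number is the core signal
    if not URL_RE.search(body):
        score += 0.2          # no link at all is unusual for vendor mail
    lowered = body.lower()
    hits = sum(term in lowered for term in URGENCY_TERMS)
    score += min(hits, 3) * 0.1   # cap the urgency contribution at 0.3
    return min(score, 1.0)

lure = ("Your antivirus subscription auto-renewed for $499. "
        "To cancel, call 1-800-555-0147 within 24 hours.")
print(round(callback_lure_score(lure), 2))
```

Note that a legitimate renewal receipt from a real vendor can trip every one of these signals, which is why this class of heuristic helps at the margin rather than closing the gap.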
The defense pattern that works
If you can't catch callback phishing at the email layer reliably, the defense has to move to the human layer: training, reporting and remediation. The four pieces:
1. Add callback templates to your simulation rotation
Most phishing simulation programs are heavily biased toward link-based and attachment-based lures. Callback patterns are an entire attack category that goes untested in many programs. Quarterly callback simulations are the minimum; the user behavior they reveal (call rate vs report rate vs ignore rate) is different enough from link-click behavior that it deserves its own KPI track. Users who never click but readily call have a different remediation focus than users who click everything.
2. Train the report-not-call reflex
The defensive reflex for callback phishing is the same as for link phishing: recognize the lure and report it via the phishing-report add-in. Training content has to teach users explicitly: an unexpected email containing only a phone number, especially with urgency cues, is a phishing report - never a call to verify.
Many users mentally classify "calling a number to check" as the safe-and-responsible option (versus clicking a link, which they have been trained to fear). The training has to actively counter that intuition. The right answer is never "I called the number to confirm." The right answer is "I reported the email."
3. Document the no-cold-call vendor list
Tell employees explicitly which vendors do and do not cold-call. Most legitimate B2B vendors transact through procurement portals; consumer-tech subscription brands rarely cold-call customers about cancellations; HR and IT departments inside the organization have their own internal communication channels. Documenting this list and training on it removes the ambiguity callback phishing exploits.
The list does not need to be exhaustive. It needs to cover the categories attackers most often impersonate: consumer tech support, fake refunds, fake invoices and internal HR impersonation. A three-or-four-sentence policy clarifies expectations; a long policy nobody reads does not.
4. Auto-assigned remediation when users fail
When a user calls a simulated callback number, fire auto-assigned remediation training within hours, not weeks. The training module should be specific to callback (not generic "be careful with phishing") and should walk through the pattern they fell for, the report-not-call reflex and the no-cold-call vendor list. Behavior-triggered training lands; quarterly all-hands does not.
Track completion rate per simulation as a program-quality metric alongside call-rate and report-rate. A program that gets call rates down to 1% but never tracks training completion has only optimized one of the three things that matter.
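The trigger-within-hours workflow above can be sketched as a small event handler. The event and assignment shapes here are assumptions for illustration, not a real platform API; the point is the structure: the assignment fires on the call event with a short due window, and completion rate is computed per simulation as its own metric.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Sketch of behavior-triggered remediation; field names and the
# 48-hour due window are illustrative assumptions.
@dataclass
class RemediationAssignment:
    user: str
    module: str
    due: datetime
    completed: bool = False

def on_simulated_call(user: str, now: datetime) -> RemediationAssignment:
    """Fire callback-specific training the moment a user calls the
    simulated number, due in hours rather than next quarter."""
    return RemediationAssignment(
        user=user,
        module="callback-phishing-remediation",  # hypothetical module id
        due=now + timedelta(hours=48),
    )

def completion_rate(assignments: list[RemediationAssignment]) -> float:
    """Program-quality metric: fraction of triggered trainings completed."""
    if not assignments:
        return 1.0
    return sum(a.completed for a in assignments) / len(assignments)
```

The design choice worth copying is that the module is callback-specific, not a generic phishing refresher, matching the point above that the training should walk through the exact pattern the user fell for.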
Reporting metrics that matter
Three metrics belong on the executive packet for any callback simulation campaign:
- Call rate: Percentage of recipients who called the listed phone number. This is the primary failure metric - the equivalent of click rate for link-based phishing. Mature programs trend below 3%.
- Report rate: Percentage of recipients who reported the email via the phishing-report mechanism without calling. This is the success metric - rising report rate is rising program maturity.
- Time-to-report: Median time from email receipt to first report. Sub-15-minute medians indicate strong organizational reflex; multi-hour medians indicate report training and reflex are weak.
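The three metrics above can be computed from raw simulation events. This is a minimal sketch assuming a simple log shape of (user, action, timestamp) tuples where action is "called" or "reported"; report rate counts only users who reported without calling, and time-to-report uses each user's first report.

```python
from datetime import datetime, timedelta
from statistics import median

# Sketch of the three campaign metrics; the (user, action, timestamp)
# event shape is an assumed log format, not a specific platform's API.
def campaign_metrics(recipients, events, sent_at):
    """Return call rate, report rate and median time-to-report (minutes)."""
    called = {u for u, a, _ in events if a == "called"}
    # First report per user, excluding users who also called
    first_report = {}
    for u, a, t in events:
        if a == "reported" and u not in called:
            first_report[u] = min(first_report.get(u, t), t)
    minutes = [(t - sent_at).total_seconds() / 60
               for t in first_report.values()]
    n = len(recipients)
    return {
        "call_rate": len(called) / n,
        "report_rate": len(first_report) / n,
        "median_time_to_report_min": median(minutes) if minutes else None,
    }
```

All three rates are over the full recipient list, not over responders, so users who ignored the email show up implicitly as the gap between the two rates and 100%.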
For broader context on what to put in the board-level phishing-program packet, see our executive metrics piece.
Where this fits in the broader threat landscape
Callback phishing is one of several attack patterns that have emerged or matured in recent years specifically because traditional defenses have improved at what they were designed to catch. The pattern shares a structural insight with OAuth consent phishing, MFA fatigue and AiTM proxy phishing: when defenders close one door, attackers find another. The attacker-side innovation is to identify channels that the existing defensive infrastructure doesn't inspect.
Defenders need to update their threat model and their testing program in step. A phishing simulation program that only tests link-based and attachment-based lures is testing the threats from 2018, not the threats from 2026. Incident response procedures for callback victims also differ from link-click incidents - the response includes phone-record review, banking-system audit and remote-access-tool inventory rather than just session-token revocation.
Bait & Phish supports callback phishing simulation as a first-class campaign category. The platform routes simulated callback numbers to managed lines that record metadata (call time, duration, caller-ID), tracks call-rate alongside click-rate as separate KPIs and triggers auto-assigned remediation training the moment a user calls. A free trial up to 25 users includes the full callback-pattern template library; contact us if you want a walkthrough specific to your industry's pretext patterns.
Frequently asked questions
What is callback phishing?
Callback phishing is a phishing email or SMS that contains no malicious link or attachment, only a phone number. The victim is induced to call; an attacker-staffed line then social-engineers them into installing remote-access software, transferring funds or sharing credentials. Also known as TOAD (telephone-oriented attack delivery) or hybrid phishing.
Why does callback phishing bypass email gateways?
Email gateways scan URLs, attachments, sender domains and content patterns. Callback emails contain no URLs, often no attachments, valid sender authentication and content that mimics legitimate vendor communication. Nothing the gateway is built to catch is present.
What pretext patterns are most common?
Fake subscription renewals (consumer-tech-support category), fraudulent invoices, fake refund offers and internal-process impersonation (HR, IT helpdesk).
What happens after the victim calls?
Attacker-staffed lines run scripted social engineering: remote-access install, banking-screen-share fraud, MFA-code harvest or information harvest for follow-on attacks. The phone call is the actual attack vector; the email was just the lure to start it.
How does callback phishing differ from voice phishing (vishing)?
Vishing is attacker-initiated cold-calling. Callback phishing is victim-initiated - the email or SMS provokes the victim into calling. The victim has already self-validated the urgency by dialing, lowering skepticism.
How do you simulate callback phishing for training?
Run an email simulation with a callback pretext (fake invoice or renewal). Track call-rate, report-rate and time-to-report. The phone number routes to a managed line that records metadata; auto-assigned remediation training fires for users who called.
Related reading: Phishing Trends 2026 covers callback phishing in the context of the year's broader patterns. MFA-bypass phishing attacks covers the AiTM and MFA-fatigue patterns referenced above. Phishing-click incident response covers the procedural side. The phishing & security awareness glossary defines all the terms used here.

