How to Report Phishing Test Results to the Board
Boards have an unforgiving signal-to-noise ratio. Eight to twelve directors meet quarterly with twenty agenda items competing for ninety minutes. Your security awareness section gets four pages and eight minutes - possibly less if the audit committee chair has questions. The phishing report you deliver in those eight minutes will set the next year's budget conversation, the tone of the cyber insurance underwriting discussion and the board's confidence in the security program. It is the highest-leverage four pages you produce all year.
This is the format that holds up in front of an actual board. It draws on Marsh and Aon's broker-side conventions for risk reporting, ISACA's governance framing for security program metrics and a decade of practitioner experience watching board packets land or fall flat. It is intentionally short, intentionally trend-oriented and intentionally written in language that translates operational metrics into governance outcomes a director can actually act on.
What boards actually want to know
Strip away the formatting and every board's phishing report is answering three questions:
- Is the program working? (Trend.)
- Where is the residual risk? (Cohort.)
- Is someone accountable for it? (Governance.)
If the four pages don't answer those three questions clearly, no amount of additional data fixes the problem. If they do, additional data is unnecessary. The structure that follows is built around these three questions explicitly.
Page 1: The executive summary
One page. Top half is text - three short paragraphs corresponding to the three questions above. Bottom half is a single trend chart with the four KPIs:
- Click-through rate by quarter (last four quarters)
- Training completion rate within 7 days
- Time-to-remediation (median hours)
- Repeat-clicker rate
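The four quarterly KPIs above can be derived from ordinary per-user campaign records. A minimal sketch in Python, assuming illustrative field names (`clicked`, `trained_within_7d`, `remediation_hours` - these are hypothetical, not any specific platform's export schema); repeat-clicker rate needs cross-campaign history, so it is omitted here:

```python
from statistics import median

# Hypothetical per-user results for one quarterly campaign.
results = [
    {"user": "a", "clicked": True,  "trained_within_7d": True,  "remediation_hours": 3.0},
    {"user": "b", "clicked": False, "trained_within_7d": None,  "remediation_hours": None},
    {"user": "c", "clicked": True,  "trained_within_7d": False, "remediation_hours": 12.5},
    {"user": "d", "clicked": False, "trained_within_7d": None,  "remediation_hours": None},
]

def quarter_kpis(results):
    clickers = [r for r in results if r["clicked"]]
    click_rate = len(clickers) / len(results)
    # Completion is measured over clickers only: "how completely we
    # close the loop after a click."
    completion_rate = (
        sum(1 for r in clickers if r["trained_within_7d"]) / len(clickers)
        if clickers else 1.0
    )
    remediation_median = (
        median(r["remediation_hours"] for r in clickers) if clickers else 0.0
    )
    return {
        "click_rate": click_rate,
        "completion_rate_7d": completion_rate,
        "median_remediation_hours": remediation_median,
    }

kpis = quarter_kpis(results)  # one row per quarter feeds the trend chart
```

Running this once per quarter and keeping the last four rows gives you the trend chart for the bottom half of page one.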
The text frames each KPI in board language. Not "click-through rate fell from 18% to 12%" but "the percentage of staff who fell for our quarterly test fell from 18% to 12%, in line with industry expectations for a second-year program." Notice the implicit benchmark - the board doesn't have to ask "is that good?"
Every executive summary should also reference at least one external authority. NIST CSF 2.0 for the framework alignment, IBM Cost of a Data Breach for the dollar context, Verizon DBIR for the threat-prevalence framing. Boards trust named external sources more than internally-derived figures.
Page 2: The cohort heatmap
One page. A grid where rows are departments (or business units) and columns are the four KPIs. Cells colored by deviation from the company average. The board should be able to scan the grid and see in two seconds where the residual risk is concentrated.
Underneath the grid: one paragraph naming the highest-deviation cohort, what intervention is happening and who owns it. "Sales operations is showing a click rate 2x our average; the manager team has scheduled targeted training and the cohort will be re-tested in four weeks. Owner: VP of Sales." That sentence is what the board is looking for. Without it, the heatmap is decoration.
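The deviation coloring behind the heatmap is simple to compute. A sketch, with hypothetical department rates and illustrative color thresholds (the 1.1x and 1.5x bands are assumptions, not a standard):

```python
# Hypothetical per-department click rates; a cell's color is driven by
# its deviation from the company-wide average.
dept_click_rates = {
    "Sales Ops": 0.24,
    "Engineering": 0.08,
    "Finance": 0.14,
    "HR": 0.10,
}

def deviation_grid(rates):
    avg = sum(rates.values()) / len(rates)
    grid = {}
    for dept, rate in rates.items():
        ratio = rate / avg
        # Three-band coloring; thresholds here are illustrative.
        color = "red" if ratio >= 1.5 else "yellow" if ratio >= 1.1 else "green"
        grid[dept] = {"rate": rate, "vs_avg": round(ratio, 2), "color": color}
    return grid

grid = deviation_grid(dept_click_rates)
# Sales Ops at 0.24 against a 0.14 company average is the red cell
# the board should spot first.
```

The same function works for any of the four KPIs; the board packet repeats it once per column.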
If executive officers, board members or signing officers have clicked or repeat-clicked, that information goes to the audit committee chair privately, not on this page. Standard board reporting aggregates to manager or department level for both privacy and governance reasons.
Page 3: Top findings and remediation
One page. Three findings, each with a finding statement, an impact statement and a remediation status. Boards rate this page on actionability - a finding without an owner and a date is not a finding, it's a complaint.
Examples of well-formed findings:
- Finding: Voice (vishing) campaign in Q2 produced a compliance failure rate of 22%, materially higher than email and SMS.
Impact: Increases exposure to wire-fraud and credential-disclosure scenarios. Cyber insurance applications now ask about multi-channel coverage explicitly.
Remediation: Adding voice-channel training module to onboarding cohort by end of Q3. Owner: CISO.
- Finding: Repeat-clicker cohort (users who failed two consecutive campaigns) is 4.2% of headcount, concentrated in three departments.
Impact: Concentrated risk in a small population; higher predictive value for actual incident likelihood.
Remediation: One-on-one targeted training plus manager engagement for all cohort members in Q3. Owner: VP HR + CISO.
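The repeat-clicker definition used in the second finding - users who failed two consecutive campaigns - is mechanical to compute from per-campaign click history. A sketch with hypothetical users and an assumed data shape (a list of per-campaign click booleans, oldest first):

```python
# Hypothetical click history, one boolean per campaign, oldest first.
history = {
    "alice": [True, True, False],
    "bob":   [False, True, True],
    "carol": [True, False, True],
    "dave":  [False, False, False],
}

def repeat_clickers(history):
    """Users who clicked in two consecutive campaigns."""
    cohort = set()
    for user, clicks in history.items():
        if any(a and b for a, b in zip(clicks, clicks[1:])):
            cohort.add(user)
    return cohort

cohort = repeat_clickers(history)
rate = len(cohort) / len(history)  # the repeat-clicker rate for page one
```

Note that carol clicked twice but never consecutively, so she is outside the cohort under this definition; whether your program counts non-consecutive repeats is a policy choice worth stating in the methodology footnote.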
Three findings is the right number. One finding looks thin; five findings dilute the actionability. Three signals an active program without overwhelming the board's bandwidth.
Page 4: The forward roadmap
One page. Three to five program changes for the next quarter, each with a one-line description and an owner. Examples that have landed well:
- Add SMS phishing simulation to standard quarterly cadence
- Roll out quarterly role-based simulation for finance signatories outside the all-hands cycle
- Update written security awareness policy to reflect cyber insurance renewal requirements
- Begin reporting executive-cohort metrics to audit committee separately
- Refresh template library to include 2026 lure categories (AI-generated phishing, deepfake vishing, QR code phishing)
The forward roadmap is also where you signal what budget you'll be asking for at the next budget cycle. Not as a budget request - as program direction. Boards consistently respond better to security program changes that were previewed quarter-over-quarter than to changes that arrive as a budget surprise.
The appendix
Per-campaign detail. Per-cohort breakdown. Compliance framework mapping (NIST CSF, SOC 2, HIPAA, PCI DSS, ISO 27001 as applicable). Cyber insurance application alignment. Anything an audit committee member or interested director might pull on. The appendix can be as long as it needs to be because it is reference material - but it should not be confused with the executive read.
Translation rules for technical metrics
Boards do not speak the operational vocabulary of phishing programs, and they shouldn't have to. The translations that consistently work:
- Click-through rate -> "Percentage of staff who fell for our quarterly test."
- Repeat-clicker rate -> "Concentrated risk in a small population that has fallen for tests repeatedly."
- Training completion rate -> "How completely we close the loop after a click."
- Time-to-remediation -> "How quickly we contain a click before it becomes an incident."
- Difficulty mix -> "Realistic versus easy tests; we run both deliberately."
Use the technical term once, then the translation thereafter; the board reads the page faster.
The narrative voice that lands
The narrative paragraphs in the executive summary do more work than any chart in the deck. Three rules of voice that consistently land with board audiences:
- Lead with what changed. "Click rate fell from 18% to 12% this quarter, driven by improved completion of the auto-assigned training pipeline." Not "click rate this quarter was 12%."
- Name an external benchmark. "This places us within the second-year program range identified in Verizon DBIR-class data and consistent with Forrester research on the category." External benchmarks insulate the report from the "is that good?" question.
- Acknowledge uncertainty explicitly. "Our hard-difficulty template results suggest residual exposure in the finance cohort that we are addressing through targeted training in Q3." Boards trust CISOs who name what they don't know more than CISOs who present a uniformly clean story.
The narrative that hides bad news loses board trust faster than any single data point. The narrative that names bad news, frames it in context and lays out a remediation owner with a date is what builds the credibility that lets the program scale.
How to handle a bad quarter
Programs do regress. A bad quarter - click rate up, completion rate down, repeat-clicker rate flat - happens to virtually every program at some point, often driven by an external factor like an acquisition that brought in an untrained cohort, a major template refresh that introduced harder lures or a personnel change in program ownership.
The reporting pattern that survives a bad quarter:
- Don't bury it. The cover paragraph should name the regression in the first sentence. Boards detect hedging quickly and discount the rest of the report when they see it.
- Name the cause. Acquisition cohort onboarding, template difficulty refresh, departmental restructure. A named cause turns regression from a program failure into a program event.
- Show the corrective action. Targeted training, re-test schedule, ownership assignment. The remediation roadmap is the page-three content, not the cover paragraph.
- Frame against the four-quarter trend. A single bad quarter against a falling four-quarter trend is a different story than a bad quarter against a flat trend. The trend chart contextualizes the data point.
A board that watches a CISO handle a bad quarter well typically increases trust in the program rather than decreasing it. The reverse is also true: a CISO who hides a bad quarter and gets caught loses trust that takes years to rebuild.
Integrating with cyber insurance and audit reporting
The same four-page packet that goes to the board should be the source for two adjacent reporting flows: the cyber insurance application and the audit fieldwork response. Both flows ask for the same underlying data with different framing:
- Cyber insurance broker: Wants 12-month campaign list, per-campaign click rate, completion rate, multi-channel coverage. Maps directly from the appendix of the board packet.
- SOC 2 auditor: Wants written policy, training assignment records, completion records and per-incident remediation evidence. Sourced from the same dashboard that produces the board metrics.
- HIPAA assessor: Wants annual training attestation plus per-incident behavior-triggered remediation records. Same data structure, different framework labels.
Producing the board packet, the broker packet and the audit packet from a single platform export saves the program owner roughly 40-60 hours per year compared to rebuilding each one separately. That time savings is itself a line item in the program's ROI calculation.
Common board-report mistakes
- Reporting one number without a trend. A single quarter's number invites the wrong question. Always include four-quarter context.
- Naming individual users. Privacy and governance both push back. Aggregate to manager or department level.
- Hiding bad news in the appendix. If something material is in the appendix, it should also be in the top-of-packet narrative. Boards lose trust in CISOs whose appendix tells a different story than the cover.
- Vendor logos on the cover. The board does not care which platform you use. Your name and the company's name on the cover; tooling references stay in the methodology footnote.
- "More campaigns than last year" as a metric. Activity, not outcome. Drop it.
How the platform produces this packet
Bait & Phish reporting exports the four-page structure as a single PDF, with placeholders for the narrative paragraphs that are inherently per-quarter. The KPI definitions match the executive expectation natively, the cohort heatmap renders from the campaign data without manual cleanup and the difficulty-mix and channel breakdowns sit one tab away in the appendix. Our team has been refining this packet structure since 2010 and it has held up across boards from family-owned mid-market companies to public-company audit committees.
If you'd like to see the four-page structure live against your own program data, start a 25-user free trial and run a campaign through the cycle. Pricing for full deployments is on the site, and if you want to walk through the board format with your specific reporting context, contact us directly.
Related reporting and metrics guides
- vCISO dashboard architecture
- Executive metrics that matter
- Security awareness ROI
- Click-rate benchmarks by industry