Key Takeaways
- Traditional completion rates measure activity, not risk reduction — seven behavioural metrics provide better evidence of programme effectiveness
- At 6-12 months, expect 40-60% reduction in baseline phishing susceptibility, with significant variation by organisation size and industry
- Real-world threat reporting rates matter more than simulation click rates — they prove training transfers to actual security scenarios
- Academic research shows mixed results for training effectiveness — sustained, frequent programmes outperform annual compliance-focused approaches
- A structured leadership reporting framework using business language drives better programme support and budget allocation
Six months into your security awareness programme, leadership asks the critical question: “Is this actually working?” Traditional metrics like completion rates tell you training happened — not whether it reduced risk. According to a 2025 industry benchmarking report, baseline phishing susceptibility drops from 33.1% to approximately 20% within the first six months of consistent training — but only if you measure the right indicators.
The challenge facing most programme managers at this stage is distinguishing between measurement theatre and genuine risk reduction. Recent academic research has found that training interventions don’t always produce the clear-cut improvements that vendor case studies suggest. This creates an urgent need for metrics that survive scrutiny from finance teams and boards who are deciding whether to continue funding your programme.
Why Your Current Metrics Are Probably Misleading
Most organisations track what’s easy to measure rather than what matters for security outcomes. Training completion rates consistently rank as the primary metric in programme dashboards, yet research from a major US healthcare system showed no significant relationship between recent training completion and phishing resistance. Employees who had just finished annual training clicked malicious links at the same rates as those who hadn’t trained in months.
The second common measurement error involves treating all phishing simulation results as equivalent. A 15% click rate on an obviously fake email means something entirely different from a 15% rate on a sophisticated, personalised attack. The federal framework for rating phishing difficulty addresses this problem directly, yet most programmes ignore simulation complexity when interpreting their results.
According to 2024 breach investigation data, 68% of confirmed data breaches involved the human element [1]. The median time from receiving a phishing email to clicking the malicious link is just 21 seconds. These figures underscore why completion certificates don't correlate with security outcomes. The question isn't whether people attended training; it's whether they can recognise and respond to real threats under pressure.
What Should Your Programme Look Like at 6-12 Months?
Industry benchmark data from over 67 million simulated phishing tests provides clear staging expectations for programme maturity. At the six-month mark, well-implemented programmes typically show a 40% reduction from baseline phishing susceptibility rates. By twelve months, the strongest programmes achieve an 86% reduction.
However, these figures require contextualisation by organisation size and sector. Healthcare and pharmaceutical organisations start from a higher baseline risk of 41.9%, while smaller organisations (1-250 employees) begin at 24.6% compared to 40.5% for enterprises with 10,000+ staff. Your programme’s progress should be measured against relevant peer groups, not global averages.
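As a sanity check, the headline reduction figures follow from a simple relative-change calculation. The sketch below applies it to the benchmark numbers quoted earlier (a 33.1% baseline falling to roughly 20% at six months):

```python
def relative_reduction(baseline: float, current: float) -> float:
    """Percentage reduction from a baseline phishing-susceptibility rate."""
    return (baseline - current) / baseline * 100

# Benchmark figures cited above: 33.1% baseline, roughly 20% at six months.
print(f"{relative_reduction(33.1, 20.0):.1f}% reduction")  # 39.6%, i.e. the ~40% six-month figure
```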
The six-to-twelve month period typically represents a critical juncture where initial enthusiasm has settled into routine, but behavioural change is still developing. Expect to see inconsistent month-to-month results as different employee groups respond at varying speeds. The key indicator at this stage isn’t perfect consistency — it’s a clear downward trend in risky behaviours when measured quarterly.
The 7 Metrics That Actually Prove Training Effectiveness
1. Phishing Reporting Rate During Simulations
This measures the percentage of employees who identify and report simulated phishing emails rather than ignoring or clicking them. According to 2024 industry data, 20% of users successfully identify and report phishing in simulation exercises. Track this monthly and aim for steady improvement over time.
Calculate this as: (Number of employees who reported the simulation ÷ Total number who received it) × 100. A rising reporting rate indicates employees are developing threat recognition skills that transfer beyond the simulation environment.
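A minimal sketch of that calculation in Python (the 180 reports out of 900 recipients are illustrative numbers, not a benchmark):

```python
def simulation_reporting_rate(reported: int, delivered: int) -> float:
    """Percentage of simulation recipients who reported the email."""
    if delivered == 0:
        raise ValueError("no simulation recipients")
    return reported / delivered * 100

# Illustrative month: 180 reports from 900 delivered simulations.
print(f"{simulation_reporting_rate(180, 900):.1f}%")  # 20.0%, matching the industry average cited above
```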
2. Real-World Threat Reporting Rate
This metric tracks how often employees report actual suspicious emails they receive outside of simulations. It’s the strongest indicator that training is transferring to real-world behaviour. Many organisations never track this despite its importance for proving programme value.
Measure the monthly volume of legitimate security reports from employees divided by your workforce size. An increase in real threat reporting suggests employees are applying their training when it matters most, and a sustained rise in this metric is closely associated with fewer successful phishing attacks.
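One way to compute it, sketched with hypothetical numbers (45 legitimate reports from a 1,500-person workforce):

```python
def real_world_reporting_rate(legitimate_reports: int, workforce_size: int) -> float:
    """Monthly legitimate security reports per employee."""
    return legitimate_reports / workforce_size

# Hypothetical month: 45 legitimate reports from a 1,500-person workforce.
rate = real_world_reporting_rate(45, 1500)
print(f"{rate:.3f} reports per employee ({rate * 100:.1f} per 100 staff)")
```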
3. Repeat-Clicker Reduction Rate
Define repeat clickers as employees who click malicious links in three or more simulations within a six-month period. Recent research showed that targeted personalised training reduced repeat phishing victims by 63% within six months when organisations implemented specific intervention protocols.
Track the percentage of your workforce classified as repeat clickers each quarter. A declining trend indicates your programme is successfully addressing the highest-risk individuals. This metric is particularly valuable for board reporting because it demonstrates risk concentration management.
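A short Python sketch of the quarterly calculation, assuming a click log keyed by employee ID (the log and workforce size are hypothetical):

```python
from collections import Counter

def repeat_clicker_share(click_log: list[str], workforce_size: int, threshold: int = 3) -> float:
    """Share of the workforce with >= threshold simulation clicks in the window (here, six months)."""
    clicks_per_employee = Counter(click_log)  # one log entry per click, keyed by employee ID
    repeat_clickers = sum(1 for n in clicks_per_employee.values() if n >= threshold)
    return repeat_clickers / workforce_size * 100

# Hypothetical six-month click log: e1 clicked three times, e3 four times.
log = ["e1", "e1", "e1", "e2", "e3", "e3", "e3", "e3", "e4"]
print(f"{repeat_clicker_share(log, workforce_size=50):.1f}% repeat clickers")  # 4.0%
```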
4. Average Time to Report Suspicious Emails
Speed of detection matters as much as detection itself. Measure the elapsed time between a suspicious email arriving and the employee reporting it. This metric captures both awareness and confidence: employees who understand threats report them faster.
Benchmark against the 21-second median click time from breach data. Employees who can identify and report threats within this window provide your organisation with genuine protective value. Track this metric monthly and celebrate improvements in detection speed.
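A minimal way to compute the median, assuming you can export (delivered, reported) timestamp pairs from your mail or simulation platform (the sample values are invented):

```python
from datetime import datetime
from statistics import median

def median_seconds_to_report(events: list[tuple[datetime, datetime]]) -> float:
    """Median seconds between an email's delivery and the employee's report."""
    return median((reported - delivered).total_seconds() for delivered, reported in events)

# Invented (delivered, reported) pairs from one month of reports.
events = [
    (datetime(2025, 3, 1, 9, 0), datetime(2025, 3, 1, 9, 1, 30)),    # 90 s
    (datetime(2025, 3, 2, 14, 0), datetime(2025, 3, 2, 14, 0, 45)),  # 45 s
    (datetime(2025, 3, 3, 11, 0), datetime(2025, 3, 3, 11, 5)),      # 300 s
]
print(f"median: {median_seconds_to_report(events):.0f} s")  # 90 s
```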
5. Knowledge Retention Score Trends
Regular knowledge assessments provide evidence of learning retention beyond immediate post-training scores. Test the same core concepts quarterly using varied question formats. Look for sustained improvement over time rather than perfect scores immediately after training sessions.
Focus on practical scenario recognition rather than policy memorisation. Effective assessments test whether employees can identify social engineering tactics, recognise urgency manipulation, and apply appropriate response procedures under realistic conditions.
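A sketch of the trend view (the cohort scores are invented; a spreadsheet would do the same job):

```python
from statistics import mean

def quarterly_means(scores_by_quarter: list[list[float]]) -> list[float]:
    """Mean assessment score per quarter; look for a sustained climb, not one post-training spike."""
    return [round(mean(quarter), 1) for quarter in scores_by_quarter]

# Invented scores (0-100) for the same cohort across four quarters.
print(quarterly_means([[62, 70, 58], [68, 72, 65], [71, 75, 70], [74, 78, 73]]))
# [63.3, 68.3, 72.0, 75.0] -- the sustained upward trend is the signal
```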
6. Security Incident Attribution Rate
Track the percentage of security incidents that can be attributed to human error versus technical vulnerabilities. A declining human-attributed incident rate over 6-12 months provides strong evidence of programme impact on actual business risk.
This metric requires coordination with your incident response team to categorise root causes accurately. The goal isn’t zero human involvement — that’s unrealistic. The target is a measurable reduction in preventable human errors that lead to security events.
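A minimal sketch of the quarterly calculation (the incident counts are invented):

```python
def human_attribution_rate(human_error_incidents: int, total_incidents: int) -> float:
    """Share of confirmed incidents whose root cause was human error."""
    return human_error_incidents / total_incidents * 100 if total_incidents else 0.0

# Invented quarterly figures from incident-response records; a declining trend is the goal.
for quarter, (human, total) in {"Q1": (14, 20), "Q2": (11, 19), "Q3": (9, 21)}.items():
    print(quarter, f"{human_attribution_rate(human, total):.0f}%")  # 70%, 58%, 43%
```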
7. Miss Rate (Neither Click Nor Report)
This often-overlooked metric measures employees who neither click malicious links nor report them — they simply ignore suspicious emails. While not clicking is better than clicking, ignoring potential threats provides no protective value to the organisation.
Calculate as: (Total simulation recipients – Clickers – Reporters) ÷ Total recipients × 100. A declining miss rate indicates employees are becoming more engaged with security rather than simply avoiding obvious risks. This engagement translates to better threat detection in real scenarios.
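In code, using the formula above (the simulation counts are illustrative):

```python
def miss_rate(recipients: int, clickers: int, reporters: int) -> float:
    """(Recipients - clickers - reporters) / recipients * 100, per the formula above."""
    return (recipients - clickers - reporters) / recipients * 100

# Illustrative simulation: 900 recipients, 120 clicked, 180 reported.
print(f"{miss_rate(900, 120, 180):.1f}% neither clicked nor reported")  # 66.7%
```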
How to Handle Contradictory Research Evidence
Academic research presents a more complex picture than vendor marketing materials suggest. A large-scale 2025 study of over 12,500 employees found training interventions showed no statistically significant effect on click rates or reporting behaviour in controlled conditions. This directly contradicts the positive results reported in many industry benchmarks.
The resolution to this apparent contradiction lies in programme design and measurement approach. The same research confirmed that sustained, frequent training programmes do produce measurable behaviour change, while annual compliance-focused approaches largely don’t. Additionally, most positive industry results come from organisations that chose and implemented training — a self-selected group likely to see better outcomes.
For programme managers, this means focusing on behavioural metrics over activity metrics, implementing continuous rather than annual training cycles, and being honest about mixed results in early programme stages. Acknowledging complexity builds more credibility with leadership than presenting unrealistically positive progress reports.
Building a Leadership Presentation Framework
Converting metrics into executive-friendly reports requires translating security data into business language. Structure your quarterly board presentation around three core questions leadership actually wants answered: Are we safer than six months ago? How do we compare to similar organisations? What specific risks remain?
Start with context from external threat data. According to 2025 industry reports, the global average breach cost reached $4.44 million, while organisations with mature security awareness programmes reduce their average breach cost by approximately $233,000. This framing positions your programme as risk management, not training administration.
Present trend data rather than point-in-time snapshots. Show six-month trajectories for your key metrics alongside industry benchmarks relevant to your sector and organisation size. Conclude with specific, time-bound recommendations for programme improvements based on the data patterns you’ve observed.
Platforms like Complorer provide built-in executive reporting templates that automatically translate technical security metrics into board-ready business language, removing the manual work of creating leadership presentations from raw programme data.
What to Do About Repeat Clickers and Non-Responders
Research from multiple academic sources confirms that additional mandatory training doesn’t improve outcomes for the most susceptible employees. Instead, implement a supportive coaching approach that treats repeat clicking as a skills gap rather than a compliance failure.
Establish a three-strike protocol: first click triggers automated additional resources, second click prompts a brief one-to-one coaching session, third click initiates a structured support plan developed jointly with HR. Track whether these interventions improve subsequent simulation and real-world reporting behaviour.
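The escalation logic is simple enough to encode directly. This sketch mirrors the protocol above; the function name and return strings are illustrative, not a product feature:

```python
def intervention_for(cumulative_clicks: int) -> str:
    """Map cumulative simulation clicks to the three-strike response described above."""
    if cumulative_clicks >= 3:
        return "structured support plan developed jointly with HR"
    if cumulative_clicks == 2:
        return "brief one-to-one coaching session"
    if cumulative_clicks == 1:
        return "automated additional resources"
    return "no action"

for clicks in range(4):
    print(clicks, "->", intervention_for(clicks))
```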
For employees who consistently neither click nor report (high miss rate), consider different intervention approaches. These individuals may have strong threat instincts but lack confidence to report suspicious activity. Focus on building reporting comfort rather than threat recognition skills.
Addressing Legal and Privacy Considerations
Individual-level behaviour tracking raises data protection concerns under applicable privacy regulations. Collecting detailed logs of who clicked which simulations, assigning individual risk scores, and tracking personal improvement metrics may constitute employee monitoring that requires specific legal safeguards.
Before implementing comprehensive individual tracking, consult your legal and HR teams about relevant employment law and data protection requirements in your jurisdiction. Focus programme metrics on aggregate trends rather than individual performance where possible to reduce legal complexity while maintaining programme effectiveness measurement.
Consider anonymising individual data for trend analysis while maintaining the ability to provide targeted support when employees request help with security skills development. This approach balances programme measurement needs with employee privacy expectations.
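One pseudonymisation approach, sketched under the assumption that a keyed hash satisfies your legal team's requirements (the key and truncation length are illustrative, and this is not legal advice):

```python
import hashlib
import hmac

SECRET_KEY = b"held-by-programme-owner-and-rotated"  # illustrative key, stored outside the analytics dataset

def pseudonymise(employee_id: str) -> str:
    """Keyed hash: a stable pseudonym for trend analysis. Only the key holder can
    re-derive a given employee's pseudonym, e.g. to offer targeted support on request."""
    return hmac.new(SECRET_KEY, employee_id.encode(), hashlib.sha256).hexdigest()[:12]

print(pseudonymise("jane.doe@example.com"))
```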
What Should You Do Next?
Begin by auditing your current measurement approach against the seven metrics outlined above. Most programmes at the 6-12 month stage are already collecting the necessary data — the gap lies in analysis and presentation rather than data collection. Calculate your real-world threat reporting rate and repeat-clicker trends first, as these provide the clearest picture of programme impact.
Prepare a quarterly leadership report using the business-focused framework described above. Position your programme results within industry context and be honest about areas where progress remains limited. Leadership responds better to measured honesty than unrealistic optimism when making budget decisions.
Finally, if your current metrics show plateauing results or concerning trends, consider whether your programme design needs adjustment rather than simply changing measurement approaches. The evidence suggests that frequent, behaviour-focused training outperforms annual, compliance-driven approaches for sustained security outcomes.
Review your programme’s measurement framework quarterly and adjust based on emerging threat patterns and organisational changes that affect security risk.
Frequently Asked Questions
How long does it take to see meaningful results from security awareness training?
Most programmes show initial behaviour change within 90 days, with a typical 40% reduction in baseline phishing susceptibility by six months. However, sustained results require continuous reinforcement rather than one-time training events. Full programme maturity typically takes 12-18 months.
What’s a good phishing click rate benchmark for my industry?
Industry benchmarks vary significantly by sector and organisation size. Healthcare organisations typically start from a 41.9% baseline, while smaller companies (under 250 employees) average 24.6%. Focus on your organisation’s trend over time rather than absolute comparisons to global averages.
Should I be concerned if some employees keep failing phishing simulations?
Repeat clickers are normal in any programme, typically representing 5-15% of participants. Research shows that additional mandatory training doesn’t help these individuals. Instead, implement supportive coaching approaches and track whether personalised interventions improve their subsequent behaviour.
How do I prove ROI from security awareness training to leadership?
Focus on risk reduction metrics rather than training activity metrics. Present trends in real-world threat reporting, security incident attribution, and repeat-clicker rates alongside industry breach cost data. Frame the programme as insurance against the average breach cost of $4.44 million rather than as an educational initiative.
What’s the difference between simulation reporting rates and real-world reporting rates?
Simulation reporting measures performance in controlled testing environments, while real-world reporting tracks whether employees actually report suspicious emails they receive during normal work. Real-world reporting rates are typically lower but provide stronger evidence that training transfers to actual security scenarios.
References
[1] Verizon Business. (2024). 2024 Data Breach Investigations Report.