I hate almost everything about phishing tests, and I think in most cases they are counterproductive.
This may seem like an odd statement from such a big advocate of “You can’t improve what you don’t measure”, especially as I don’t hate the tests themselves. What I hate is why and when most people do them.
The base problem is that most people struggle to answer WHY they do them. If it’s simply to tick a box on a compliance or insurance checklist, then fair enough, you have my sympathy; sometimes you have to suck it up and do it, because it’s a requirement and arguing with an auditor isn’t generally a good use of time. But what if it’s not? What if you’re doing them “to improve security”? Why would I hate them then?
Dictionary.com defines a test as
1. the means by which the presence, quality, or genuineness of anything is determined; a means of trial. 2. the trial of the quality of something: (to put to the test).
So, the question I put to phishing test advocates is WHAT are you testing?
If you’re testing whether people will click links in emails, let me save you the effort. They will. Links are designed to be clicked, that’s precisely why they are there. Copywriters spend lots of time crafting their “call to action” in emails, finding ways to make the link more clickable. So people clicking links is expected behaviour and if people clicking links is so bad, why not delete any email with a link? Because clicking links is an important part of emails, that’s why.
If you’re testing whether people can tell safe links from unsafe ones, well, you know what, that’s REALLY HARD. If it wasn’t, we’d have taught computers how to do it years ago. The whole reason phishing links make it to users at all is that it’s really difficult to define how to spot them reliably.
If it’s not to test, but to educate, what then? I think it’s quite well established that ritual humiliation isn’t actually a good teaching method, and that’s essentially what we’re doing. It’s like asking “are you dumber than a phisher?”. This isn’t the 1990s; it’s unlikely a notable percentage of your users are new to email and unfamiliar with phishing. Nobody who gets caught by a phishing test goes “wow, I didn’t realise this was a thing, thanks for the heads up”.
And if you use it to trigger some “extra training”, then what do you put in that training? It’s almost guaranteed they already know whatever tips you can give for spotting dodgy links (they just missed the clues this time), and aren’t these tips ones your super-wizzy AI-powered mail filter should already be applying anyway? No, in almost all cases, you’re essentially giving them a detention rather than a lesson. The training is just the price they have to pay for getting caught out.
Sure, if you’re measuring click-rates across multiple tests, you may be able to evidence an improvement on the second test, but how much of that is simply heightened awareness and a dread of feeling like the school dunce, rather than newly acquired knowledge? How quickly does that awareness fade? How often can you reinforce it before users become fatigued by it?
For me, in most cases (and we’ll come onto the exceptions shortly), phishing tests are little more than victim blaming. Shaming users for doing a thing that is designed to be done, and for failing to spot a thing that multiple security controls likely also missed, isn’t helping anyone. Google didn’t get to the position of having zero of its 85,000 staff successfully “credential-phished” in 2017 by training them to spot bad links better; they did it by mandating MFA on everything. Similarly, you don’t fix whale-phishing/BEC by routinely attempting to whale-phish the CFO’s PA. You do it by being aware that whale-phishing is a thing and building processes accordingly, not by hoping some person who is desperately trying to do the right thing can think like an attacker.
Improve tech, improve processes, add mitigations… just don’t think you’ll improve things a noticeable amount by proving to users that they can be phished. Even the best of us can have an off-day and get caught.
Rather than “how not to be phished” training for those who “failed”, keep users apprised of common campaigns and techniques, share your company’s success stories (i.e. talk about all the stuff you blocked upstream), and gamify reporting suspected phishing rather than punishing those who miss one. Reward good behaviour, without focusing on those who got it wrong this time.
So, when do I think you should use phishing tests (other than when it’s mandated by regulatory bodies etc.)? Simply, when you have something to measure. I shared my key questions on Twitter a while ago.
If you’re testing the effectiveness of your current security controls (e.g. mail filters), great. If you’re testing how, as an organisation, you react to an incoming campaign (i.e. how fast your SOC rinses all signs of it from users’ inboxes), great. These are both testable scenarios around existing controls, and you can easily define what you class as success/failure for them (e.g. only 4 users clicked the link before the C2/payload-source/campaign was identified and blocked). However, if your only finding is that 20 out of 1,000 people didn’t see through your deception, then you have to ask yourself what new actionable information you have gained. That each time, a handful of people will click a link designed to be clicked? If that’s all, then I suspect it wasn’t time well spent.
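To make the contrast concrete, here’s a minimal sketch of the kind of measurable outcome I mean. All names and data below are invented for illustration; the point is simply that “clicks before the campaign was blocked” and “time from first click to block” are metrics you can act on, whereas a raw click-rate usually isn’t.

```python
from datetime import datetime, timedelta

def campaign_metrics(click_times, blocked_at):
    """Return (clicks landed before the block, elapsed time from
    first click to block). Both are actionable: the first measures
    exposure, the second measures SOC response speed."""
    clicks_before_block = sum(1 for t in click_times if t < blocked_at)
    time_to_block = (blocked_at - min(click_times)) if click_times else None
    return clicks_before_block, time_to_block

# Invented example data: four clicks over 45 minutes, SOC blocks
# the campaign 30 minutes after the exercise starts.
start = datetime(2023, 1, 1, 9, 0)
clicks = [start + timedelta(minutes=m) for m in (1, 3, 7, 45)]
blocked = start + timedelta(minutes=30)

n, ttb = campaign_metrics(clicks, blocked)
print(n)    # clicks that landed before the block: 3
print(ttb)  # time from first click to block: 0:29:00
```

If the next exercise shows time-to-block shrinking, your controls and processes are improving; if only the click count moves, you’ve mostly measured how convincing your own lure was.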
So, remind me again, WHY do you do phishing tests?
I recall this incident from a high school trip in PA (read about it). Basically, the teachers got together, posed as ‘terrorists’, and pretended to stage an attack on the kids. At first, they said it was a preparedness drill. When that epic-failed, they then said it was a prank, common on these trips, and that the kids – whose grades and college entries depend on these teachers – loved it.
Yah, pranksters are pranksters: they chuckle afterwards about how clever they were and how we don’t appreciate their efforts/humour/whatever. I’d lump these now-constant phishing tests in with the comedian who passes around phony lottery tickets.