Educause Security Discussion mailing list archives

Re: attempts sending fake phishing messages to students and/or employees


From: Sam Hooker <samuel.hooker () UVM EDU>
Date: Mon, 14 Jun 2010 13:16:50 -0400


In a discussion strictly about phishing, I would happily concede Tom's
(and later, Ben's) points. But I'm more interested in the general issue
of infosec's ability to test user behavior in the wild and use those
results to drive operational decisions. I do love a good awareness
campaign as much as the next person (possibly more). But our "people"
measures ought to be imagined with defense in depth in mind, just like
our technical measures. Methodical testing of user behavior is an
important (if uncomfortable) threshold that probably needs to be crossed.

(You can stop reading now if you value your Monday. ;-))

On 2010-06-11 08:10, Davis, Thomas R wrote:
> On Jun 10, 2010, at 9:21 AM, Dave Kovarik wrote:
>
>> With one exception, I have yet to have top-level management
>> agree in practice that phishing one's own community was a good idea.
>
> Agreed.  I can understand the "research" benefits of conducting fake
> phishing.  However, on the "operational" side of the house, the
> benefits are minimal and the political fallout great.  The real question
> is, even if you do conduct a fake phishing run against your users, what will
> you do with the results?  Do better awareness training?

Sorry for being oblique about it in my original post: by "follow-up", I
meant "respondent-targeted training". Intervention, man.


> Based on real phishing success rates, I'm pretty certain the fake phishing
> run will be successful too.  So, why do it?

So that we can directly benefit from the user's actions, performing
targeted training on a schedule and under parameters that *we* define.
Sure, "naturally-occurring" phishing might provide us with tons of
opportunities to target education at some of our most-susceptible
constituents. I see four problems with relying entirely upon natural
occurrence, though:

1) Per Bob Bayn's post, reliance upon natural occurrence results in a
suboptimal sample. Yes, we could find out whether a given message was
delivered to the entire university community with log analysis. A
researcher might be prepared to conduct that analysis and either wait
for naturally-occurring attacks to eventually hit everyone in the
community or extrapolate severely. But I'm looking to derive operational
intelligence, which demands a more aggressive timeline. Plus, "the
natural method" presupposes we've *detected* user responses to a given
attack.
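To make #1 concrete: a rough sketch of the kind of log analysis I mean, measuring how much of the community a given message actually reached. The log format, field names, and addresses here are all invented for illustration; real maildrop logs would need real parsing.

```python
# Hypothetical sketch: estimate what fraction of the community a given
# phish reached, from simplified delivery logs. Format is made up.
from __future__ import annotations

community = {"alice@uvm.edu", "bob@uvm.edu", "carol@uvm.edu"}

# Pretend log lines: "timestamp message-id recipient status"
log_lines = [
    "2010-06-14T09:00 phish-123 alice@uvm.edu delivered",
    "2010-06-14T09:00 phish-123 bob@uvm.edu delivered",
    "2010-06-14T09:01 phish-999 carol@uvm.edu delivered",
]

def coverage(msgid: str, lines: list[str], population: set[str]) -> float:
    """Fraction of the population that received message `msgid`."""
    hit = {
        rcpt
        for line in lines
        for _ts, mid, rcpt, status in [line.split()]
        if mid == msgid and status == "delivered"
    }
    return len(hit & population) / len(population)

print(coverage("phish-123", log_lines, community))  # → 0.6666666666666666
```

A researcher waiting on natural attacks has to hope this number eventually approaches 1.0 across many incidents; a controlled run lets us set it to 1.0 on day one.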

2) We actually miss an opportunity by being solely opportunistic: we
could get ahead of the curve. Users who are currently "phish-resistant"
may only be that way because of some mutable, cosmetic property of the
attacks, like mangled use of language. Will they still make good
decisions when the attacks evolve (and attacks already appear to be
improving), or was it luck of the draw (or our malware filters) that
has kept them safe thus far? Why not find out by evolving the attack
ahead of the attackers? I have to believe that long-term retention can
be enhanced, and users' behavior made more adaptable, if we expose the
core attack through controlled mutation of its cosmetic cues.
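By "controlled mutation of cosmetic cues" I mean something like the following sketch: hold the core lure constant across campaign waves and vary only the surface properties (language polish, link domains). The lure text, variant lists, and domains are all hypothetical.

```python
import random

# Hypothetical sketch: one core lure, cosmetic cues mutated per wave.
CORE_LURE = "Your mailbox quota is exceeded. {verify} here: {link}"

COSMETIC_VARIANTS = {
    # Deliberately ranges from sloppy to polished, so we can test whether
    # users key on language quality rather than on the lure itself.
    "verify": ["Verify you account", "Verify your account",
               "Validate your credentials"],
    "link": ["http://example.invalid/q",
             "http://helpdesk.example.invalid/quota"],
}

def mutate(seed: int) -> str:
    """Produce one wave's message: same lure, different surface cues."""
    rng = random.Random(seed)  # seeded, so each wave is reproducible
    return CORE_LURE.format(
        verify=rng.choice(COSMETIC_VARIANTS["verify"]),
        link=rng.choice(COSMETIC_VARIANTS["link"]),
    )
```

If users who resist the sloppy variants also resist the polished ones, that's evidence of real behavioral resilience rather than luck of the draw.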

3) Phishers are generally not considerate enough to attack our clients
on days when our follow-up teams aren't already in the middle of some
other time-consuming work. So, follow-up waits. As a result, timespan
between event and remediation increases, client memory fades, lesson
is...not entirely "written to disk".

4) Phishers are generally not considerate enough to attack our clients
in such a way that the *clients* are not already in the middle of
something time-consuming when follow-up occurs. As a result, client is
distracted during remediation, performs the meatspace equivalent of
"clicking the OK button" until they can get back to work, lesson is not
entirely written to disk.

Maybe it is written to disk, but poorly indexed. Anyway...

I'm happy for the researchers to shoot down #1, and the professional
educators to debate the merits of #2. But my greater point lies in being
able to treat this as a project, rather than solely in an opportunistic
fashion. I'm not suggesting that we abandon natural opportunities as
they arise, but I can't help thinking that acutely-targeted education
has a chance to contribute all the more effectively to overall security
improvement if conducted by *focused* staff and *juxtaposed very closely
with user action*. This requires time and planning. In other words: a
project.


> There *might* be a couple of legitimate reasons, but none IMHO outweigh
> the damaged goodwill that others have mentioned.

Well, I continue to believe that the damage to goodwill suffered is 80%
determined by prep (securing top-management support, setting appropriate
expectations) and 10% by infosec's adroit application of people skills
(especially empathy) to the aftermath. In recent years I've witnessed a
lot of progress in how infosec handles the discomfort caused by its
bolder undertakings.

*bah* This is probably Audit's job, anyway. ;-p


Cheers,

-sth

-- 
Sam Hooker | samuel.hooker () uvm edu
Systems Architecture and Administration
Enterprise Technology Services
The University of Vermont
