Educause Security Discussion mailing list archives

Re: Measuring security


From: Isac Balder <piis8 () YAHOO COM>
Date: Thu, 6 Nov 2008 06:38:42 -0800

Heather,

You state two completely different things, though they are related.
1.) Goals
2.) Metrics

Metrics can be used to measure the effort towards / accomplishment of a goal, but metrics should not define the goal.


Ditto on Gary's feedback; very well said.
A lot of metrics / reports, especially those prepackaged in a security system, rely on event, target, and vulnerability
counts.  Pure top 10 counts are useless.

What hasn't been mentioned at this point is: do you have a risk analysis of your infrastructure?  The meaningful data
used for your metrics needs to apply to something tangible.  Is your goal to close all security holes regardless of
type, or is your goal to reduce the number of critical security holes (critical being dependent on your specific
situation)?
Once you have identified which servers and services are critical, what access there is to these objects, and how they
rank in criticality / priority, you can then develop a metric to measure the reduction in exposure / risk.  Even if you
don't close a vulnerability because operations require the service, documenting a mitigation strategy to handle any
potential attacks / breaches of that service at least gives you a solid metric: accepted risks with a managed action
plan vs. accepted risks without.
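
To make that concrete, here is a minimal sketch of what such an exposure metric might look like (Python; the service
names, the weights, and the doubling of unmanaged risks are all hypothetical placeholders, not any standard):

    # Hypothetical sketch: criticality-weighted exposure, where accepted
    # risks without a managed action plan count double.
    CRITICALITY_WEIGHT = {"high": 3, "medium": 2, "low": 1}

    services = [
        # (name, criticality, open_vulns, has_action_plan)
        ("student-records-db", "high", 2, True),
        ("public-web", "medium", 5, False),
        ("print-server", "low", 1, True),
    ]

    def exposure_score(services):
        score = 0
        for name, crit, vulns, has_plan in services:
            score += vulns * CRITICALITY_WEIGHT[crit] * (1 if has_plan else 2)
        return score

    # The trend of this number over time, not its absolute value, is the metric.
    print(exposure_score(services))  # 2*3*1 + 5*2*2 + 1*1*1 = 27

Whether unmanaged risk should count double or ten-fold is exactly the kind of tuning each entity has to do for itself.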


A cookie-cutter list of X over Y metrics, though some exist, will ultimately have to be tweaked and
modified for each entity that uses them.  Most I have seen online relate directly to specific regulatory issues.


I.B.




--- On Wed, 11/5/08, Basgen, Brian <bbasgen () PIMA EDU> wrote:

From: Basgen, Brian <bbasgen () PIMA EDU>
Subject: Re: [SECURITY] Measuring security
To: SECURITY () LISTSERV EDUCAUSE EDU
Date: Wednesday, November 5, 2008, 5:56 PM
Gary has a great answer for data security, and I think it is
contingent on 'meaningful' numbers. Trying to
encapsulate the raw dump of a Nessus scan isn't going to
be a successful endeavor. Vulnerability analysis, for
example, should occur after scans have been vetted and some
remediation efforts have occurred. Thus, among your data
points are X vulnerabilities remediated for Y detected at Z
time.
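
A minimal sketch of that data point (Python; the
accepted-findings list and the counts are invented, and the
"vetting" here is just set subtraction, not real scan
output):

    # Hypothetical sketch: vet the raw scan (drop accepted findings),
    # then report X remediated of Y detected at time Z.
    ACCEPTED = {"ntp-info-disclosure"}  # a finding we've decided to accept

    raw_findings = {"ms08-067", "ntp-info-disclosure", "weak-ssl-cipher"}
    vetted = raw_findings - ACCEPTED

    detected = len(vetted)  # Y
    remediated = 1          # X: fixed since the last scan (invented)
    print(f"2008-11-05: {remediated} remediated of {detected} detected "
          f"({remediated / detected:.0%})")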

 FWIW, on the information security side of the house, the
"metrics" discussion falls into the realm of risk,
which can be quantified, measured, trended over time, etc.

~~~~~~~~~~~~~~~~~~
Brian Basgen
Information Security
Pima Community College



From: The EDUCAUSE Security Constituent Group Listserv
[mailto:SECURITY () LISTSERV EDUCAUSE EDU] On Behalf Of Gary
Dobbins
Sent: Wednesday, November 05, 2008 3:35 PM
To: SECURITY () LISTSERV EDUCAUSE EDU
Subject: Re: [SECURITY] Measuring security

I recommend measuring vulnerabilities on campus systems, as
found by a network vulnerability analyzer (or a skilled
assessment team for non-network pen-testing), along with
campus audience awareness.  Why?


1) They are direct predictors of a security "event"
happening.

2) We can directly influence them (determined by money,
support, time, etc.).

3) The tools which measure them are relatively stable,
yet evolve as threats change, and are generally used more
places than just here, so we're not an island of data.

4) An increase in the vuln count, when it happens, is due
either to increased threats or decreased vigilance.
Both of these, and what can (or cannot) be done about them,
are readily understood by a non-technical audience.

5) A decrease in the count of vulns, and/or an increase in
awareness, means risk is going downward (see the short
trending sketch after this list).
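
As a rough illustration of points 4 and 5, a short sketch of
how that trend might be computed (Python; the monthly counts
are invented):

    # Hypothetical sketch: month-over-month change in the vuln count.
    monthly_counts = {"Aug": 52, "Sep": 47, "Oct": 55, "Nov": 41}

    months = list(monthly_counts)
    for prev, curr in zip(months, months[1:]):
        delta = monthly_counts[curr] - monthly_counts[prev]
        reading = ("up: new threats or lapsed vigilance?" if delta > 0
                   else "down or flat: risk is not increasing")
        print(f"{prev} -> {curr}: {delta:+d} ({reading})")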

Counting attacks, conversely, can be very misleading or
easily misunderstood.  An attack count can reflect any one
of several things, many of which are outside your control
(e.g. threats evolving, smarter attackers, etc.).
Counting the presence of fortifications (systems patched,
antivirus deployed, and the like), likewise, tells you what
you've built, but is only an indirect indicator of the
likelihood of an incursion.  ("Look how tall our wall is!"
sounds nice - except if the back gate is hanging open.)

I'm not saying these latter counts aren't also
useful, just that as executive metrics (where brevity is
very much the soul of wit) they are too hard to explain
meaningfully, and their trend can be confusing.  But as
indicators read by a security professional or a CIO they are
very meaningful in their own right.



From: The EDUCAUSE Security Constituent Group Listserv
[mailto:SECURITY () LISTSERV EDUCAUSE EDU] On Behalf Of Heather
Flanagan
Sent: Wednesday, November 05, 2008 5:06 PM
To: SECURITY () LISTSERV EDUCAUSE EDU
Subject: [SECURITY] Measuring security

Hi all -

I've been asked to create some measurable target goals
for data security.  This is proving to be a tricky set of
metrics to define!  What I've realized so far is:

1 - trying to go by how many holes or warnings are found by
Nessus won't work; way too many false positives
2 - trying to go by what a third-party penetration test
might find won't work; what they are measuring varies
too much and there have so far been way too many false
positives or things we considered completely acceptable
(yes, a domain controller is going to act as a time server
to anyone who checks)
3 - trying to go by "well, doesn't look like
we've been hacked recently"...  not quite the
business metric I'm looking for

Is anyone out there finding a particular set of metrics
that works for you and your campus leadership?
Heather Flanagan
Director, System Administration
heatherf () stanford edu



