Educause Security Discussion mailing list archives

Re: Security metrics for small and community colleges


From: Alan Amesbury <amesbury () OITSEC UMN EDU>
Date: Tue, 29 May 2007 17:54:14 -0500

Mark Morrissey wrote:

Let me preface this by saying that it is my assumption that small colleges
and community colleges have fewer staff and other resources for developing,
analyzing and reporting security metrics (perhaps IT metrics of any kind).
If I am wrong in that assumption, please accept my apologies.

I think this is one of those things where size doesn't matter.  There
are edge cases, but as .EDUs we all have to comply with FERPA and pals.
This, along with a sizeable population of traditionally creative (some
might say hostile) users, tends to make our "security lives" somewhat...
interesting.

I am relatively new to my institution and am bringing the information
security program up from scratch. As I start this program, it is clear that
we will need to baseline our security posture so that we can measure
and report the effect of infrastructure changes on that posture, and
report out to various stakeholders the state of information security
in a manner that is meaningful to them.
[snip]

Recently I've been shifting focus to wrap my head around security
metrics, too.  While I haven't yet figured out how to come up with
meaningful numbers here, I *have* figured out that there are a couple
of things that do NOT look like good metrics.  The first is ISO17799.  While
it appears to be a very thorough set of information security guidelines,
it seems more like an auditor's friend than anything else.  In
particular, it *is* focused on auditing, not metrics.  I mention this
because, in my reading, I've seen numerous references to ISO17799 as a
security benchmark.

Before getting to the second point, some background on what I mean by
a "metric" is in order.  I treat a metric as a measurement, something
determined by objective, clearly defined criteria.  If there's a point
in the metric-gathering process where human judgment comes into play
and it's possible for two different humans to give two different
answers, then it's not a metric.  So, as far as I'm concerned, metrics
are objective, consistent, simple, and reliable.  They're also
hopefully cheap to gather, process, tally, etc.
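To make this concrete, here's a minimal sketch of the kind of thing
I'd call a metric: counting distinct hosts flagged as unpatched in a
vulnerability scanner's CSV export.  The file name and column layout
below are invented for illustration, but the point stands for any
mechanical count: two people who run it against the same data get the
same answer.

        #!/usr/bin/env python
        # Sketch of an objective metric: count distinct hosts with at
        # least one "unpatched" finding in a scanner's CSV export.
        # The file name and columns (host, status) are assumptions
        # made for illustration.
        import csv

        def unpatched_host_count(path):
            hosts = set()
            with open(path, newline="") as f:
                for row in csv.DictReader(f):
                    if row["status"].strip().lower() == "unpatched":
                        hosts.add(row["host"])
            return len(hosts)

        if __name__ == "__main__":
            print(unpatched_host_count("scan_results.csv"))

There's no judgment call anywhere in that loop, which is what puts it
on the right side of my definition.  With that in mind.....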

The second is annualized loss expectancy.  In spite of its pleasant
acronym, I don't think ALE has any business in information security
metrics gathering.  While it's easy to calculate

        ALE = (Annual Rate of Occurrence) * (Single Loss Expectancy)


it's far too subjective.  Ask ten different people to calculate the SLE
of a virus outbreak for your organization and you'll probably get
fifteen [sic] different answers; the loss is too difficult to quantify.
The same is also
true of the rates of occurrence.  I don't think you can accurately
predict them, either.  Since ALE is determined by *multiplying* these
two wildly variable numbers together, it's no wonder it's practically
meaningless.  Until we have really good numbers for incident rates and
industry-accepted formulas for calculating losses, I don't think ALE is
suitable as a simple, consistent, objective measure.
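To put numbers on that instability, here's a toy calculation.  Every
figure below is invented; the point is that two defensible-sounding
estimates for the same virus scenario land sixty times apart.

        # Toy ALE calculation showing how much the product swings
        # with subjective inputs.  All figures are invented.

        def ale(aro, sle):
            # Annualized Loss Expectancy =
            #   (Annual Rate of Occurrence) * (Single Loss Expectancy)
            return aro * sle

        low = ale(aro=2, sle=5000)     # "a couple of minor outbreaks"
        high = ale(aro=12, sle=50000)  # "one bad outbreak a month"

        print(low, high)               # 10000 vs. 600000

Same question, same organization, a 60x spread.  That's what happens
when you multiply two guesses together.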

Good luck with your findings, though.  I'll be keeping an eye on this
thread.  :-)


--
Alan Amesbury
OIT Security and Assurance
University of Minnesota
