Firewall Wizards mailing list archives

The State of Information Security, 2004 (survey)


From: "Marcus J. Ranum" <mjr () ranum com>
Date: Sun, 05 Sep 2004 20:43:16 -0400

Speaking of "Bad Surveys" here's a classic. This just came
across the radar screen this evening....

I've chopped huge hunks out and added my own snotty remarks
inline to try to illustrate some of the things I was talking about
in my rant last week.

The State of Information Security, 2004  
By Lorraine Cosgrove Ware 
Executive Summary 

Our second annual Global State of Information Security study, conducted in
partnership with CIO Magazine, CSO Magazine and PricewaterhouseCoopers,
found that while companies’ security resources did not grow vastly year to
year, their security practices improved overall.

[...]
This is a great example of how damaging pseudo-science can
be. Did the Executive Summary present this as "our statistics
indicate"? or even "surveys imply"?  No. They present a result
based on shoddy statistics as indicating a *fact*.  

[...]

Methodology 

The State of Information Security 2004, a worldwide study by CIO Magazine
and PricewaterhouseCoopers, was conducted online from March 22 through
April 30, 2004. Readers of CIO Magazine, CSO Magazine and clients of
PricewaterhouseCoopers from around the globe were invited via email to take
the survey. The results shown in this report are based on the responses of
more than 8,000 CEOs, CFOs, CIOs, CSOs, vice presidents and directors of IT
and information security from 62 countries. The margin of error for this
study is 1%. 

OK, this is the important part, here. Can you say "self-selected sample"??
Readers were emailed a survey and some of them answered. What have
we measured, here?

One possible thing we have measured is the number of IT executives
that have spam-blockers. ;)  Or we have measured the number of IT
executives who have too much free time on their hands... Or - well,
we don't KNOW - that's the problem with self-selected samples.
Of particular interest in the description above is the "clients of
PricewaterhouseCoopers from around the globe" - what does that
mean? Are they executives, or software engineers or sales reps
or - what? Again, we don't know. But the premise and tone of the
survey makes it sound like it's a scientific survey of senior executives;
reading the "Methodology" makes one wonder if that's the case. My
guess is even the folks who did the survey have very little actual
idea what the sampling bias was, here.

"Margin for error for this study is 1%" - margin for error is a
statistical concept in which you (assuming a normal distribution)
predict the likelihood that the sample falls outside of the larger
population as a whole. It's a very powerful technique, but it
only can be applied effectively when you're dealing with a
good sample - a *random* sample. The fact that the folks
who did this survey appeal to "margin for error" indicates
that they either a) don't know what they are doing or b) know
that their sample is invalid and are trying to "dress it up"
a little bit. My money is on "a" but I have no way of knowing.


The study represents a broad range of industries including consulting and
professional services (13%), government (10%), computer-related
manufacturing and software (9%), financial services/banking (9%), education
(7%) and healthcare (5%). 

This is based on the respondents' responses - in other words,
they bucketed themselves into those groups. There are lots
of ways this could result in serious methodological errors, since
the respondents might not have had adequate guidance to
determine where they should bucket themselves.

More significantly, look at the breakdown of industries
and ask yourself, "does that represent a fairly accurate
mapping to the industry market?"  If we were dealing with
an unbiased sample, we'd expect a random (or near-random)
distribution. Hm... Do you think 13% of the industries in the
world are professional services?? Or do you think some
sampling bias may have crept in? I'm being sarcastic, in
case you can't tell: this is a strong indicator that the sample
set shows severe bias - though we can't say anything about
what that bias means.
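To see why self-selection matters, here's a toy simulation (the numbers are hypothetical, purely for illustration) of how it wrecks an estimate no matter how big the sample gets:

```python
import random

random.seed(42)

# Hypothetical setup: 20% of a population of 10,000 shops had a
# breach, but breached shops are 3x as likely to answer the survey.
population = [True] * 2_000 + [False] * 8_000
respond_prob = {True: 0.30, False: 0.10}

# Each shop self-selects into (or out of) responding:
respondents = [x for x in population if random.random() < respond_prob[x]]
estimate = sum(respondents) / len(respondents)

print(f"true rate: 20%, self-selected estimate: {estimate:.0%}")
```

The self-selected estimate lands way above the true 20% - and collecting *more* self-selected responses doesn't fix it; it just makes the wrong number look more precise.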

[...]

In terms of title, fifty-eight of the respondents held IT titles including
CIO, vice president, director and manager, and executives while 11% were
information security professionals. Fourteen percent of those surveyed held
CEO, CFO or non-IT director titles while 17% listed “other.” 

This breakdown is also interesting!!

In a paragraph above, the authors of the article write:
"The results shown in this report are based on the responses of
more than 8,000 CEOs, CFOs, CIOs, CSOs, vice presidents and directors of IT
and information security from 62 countries"
Which sure makes it sound like the survey respondents were execs
and bigwigs. But then in terms of title at the bottom they disclose
that 58 out of the respondents held IT titles. In other words, 
0.7% (that's 7 in a thousand!) were CEOs, CFOs, CIOs, CSOs
etc... -- a bit of a contradiction. In fact, since 17% of the respondents
listed "other" titles, there's a clear internal contradiction between
the survey's own numbers and its executive summary.

What can we conclude from this survey? *NOTHING*. We can
conclude exactly nothing; yet the Executive Summary makes
what appear to be statements of fact, namely "security
practices improved overall."   We're all familiar with the
expression "Garbage in, garbage out" - this survey is a
prime illustration of that adage. These guys would have earned
an 'F' in any undergraduate level stats class, but only because
scores don't go lower. And this is one of the *BETTER*
surveys I've seen in Infosec bogo-statistics.

I need to emphasize: I do not have anything against the people
who produced this survey. I actually *WISH* that this survey
were valid, were well-constructed, and were unbiased. I would
*LIKE* to have the information this survey purports to have
collected. That's what's so pernicious about this kind of nonsense.
It's just believable *ENOUGH* to get people to take it seriously.
It's just believable *ENOUGH* that it can be used to influence
purchases, tighten the sales cycle, sell consulting, or whatever.
So people are going to latch onto it, like they do other flawed
surveys, and forward it to their bosses and say "see we need
to spend {more|less|whatever}" - thus bullsh*t gets propagated.

This survey, and others like it, could be done right for only about
ten times as much effort. Personally, I think the effort would be
worthwhile. We'd need some organization like ACM to step
forward and *require* its membership to participate in a
controlled survey, from which a random subsample could
be drawn. We'd need actual verification of the results, to
see if the respondents were answering accurately or
clowning around. Etc. None of this is rocket surgery; it's
just harder and requires more discipline than practicing
statistical proctology does.
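The "draw a random subsample from a controlled roster" part really is the easy bit - here's a sketch (the roster is hypothetical, a stand-in for something like an ACM membership list):

```python
import random

# Hypothetical membership roster of 50,000 names:
roster = [f"member-{i:05d}" for i in range(50_000)]

rng = random.Random(2004)           # fixed seed so the draw is auditable
sample = rng.sample(roster, 1_000)  # every member equally likely, no repeats

print(len(sample), len(set(sample)))  # prints: 1000 1000
```

That's the whole trick: the hard work isn't the sampling code, it's getting a complete roster to sample *from* and verifying the answers you get back.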

mjr. 

_______________________________________________
firewall-wizards mailing list
firewall-wizards () honor icsalabs com
http://honor.icsalabs.com/mailman/listinfo/firewall-wizards
