Firewall Wizards mailing list archives

Re: Flawed Surveys [was: VPN endpoints]


From: "Paul D. Robertson" <paul () compuwar net>
Date: Wed, 1 Sep 2004 14:23:42 -0400 (EDT)

On Wed, 1 Sep 2004, Marcus J. Ranum wrote:

>> So long as they're flawed approximately the same way from survey to
>> survey, they're often both "better than nothing[1]" and a good relative
>> metric.

> Sorry, but you're completely wrong about that.

I've often used the results of non-randomized, non-blinded surveys to
approximate my risk.  It's often worked well.  Just because it can go
wrong doesn't mean it has to.

One of the major problems we keep running into in this industry is the "if
it isn't perfect, it's not good enough" syndrome.

> The reason is because if you have a survey of unknown bias, you
> can't assume that the bias does not change because of other factors,
> because the bias is unknown. In other words, unless you know how
> wrong it is and why, you can't be sure it's wrong the same way
> twice.

If you know the field, and you know the "normal" responses, then you know
enough to be able to use the data, depending on the survey, respondents,
etc.

>> We often don't need absolute metrics, relative metrics will do
>> just fine.

> Be careful; polls are opinion measures, not metrics. Metrics would
> be if you were (for example) pulling actual data from corporate
> financials regarding security expenditures. Measuring someone
> who claims to be CIO's opinion about what their expenditures
> either {are|should be} is not even good enough to give a relative
> metric.

Symantics aside, relative numbers are often "good enough" - and they get
better when you can validate them against the data you do have.  For
instance, if I ask a bunch of AV admins if this year was better or worse
than last year for them outbreak-wise, I can check the number of wild
viruses for the time period, the number of incidents, actual outbreak
numbers at a few places, and, say, the number of signature updates three AV
companies did, and correlate that data to the poll responses to see if
things are out of whack.

> What I think you're saying, unfortunately, is "having some 'gee wow'
> numbers is good enough to blow some basic FUD and we need
> basic FUD so it's OK."

No, what I'm saying is that polling works for some sets of things, and while
you might have a larger margin of error, that doesn't mean we should just
throw it out.

>> I know what my $foo risk was last year, and I know what it was
>> the year before, and I can compare to the survey and see the relative
>> differences and the relative change- therefore, I can figure out my
>> approximate relative change for this year.

> But that's the problem. You don't actually "know" anything. You

Sure I do, I know MY data, I know what the poll reported, and I know what
the poll questions were.  I also know my answers to the poll.  If it's my
poll, and it's not blind, then I also know how the people who answered
last time answered this time too.

> have some information that is based on a self-selected sample
> which I guarantee you will change next year. Different people
> will be bored enough to answer the survey, and the answers
> they give will be either more or less misinformed than they
> were last year. There are no constants *whatsoever* in these
> surveys.

There's still a mean answer, and for some things there's no indication
that the amount of clue among the respondents to most polls changes that
drastically.

> Now, if you said you were going to take the same self-selected
> sample and poll those same people next year, you're starting
> to apply some controls to your survey, but they're still not going
> to be good enough to give you a result worth having.

"Worth having" is in the eye of the beholder.  If my margin of error is
30%, I'm still in the ballpark.

The weatherman can't tell you exactly what the temperature will be
tomorrow, but he can get in the ballpark, and that's good enough to plan a
picnic with most times.

>         - How much the person cared about the topic (motive to respond)
>         - How honest the respondent is (hard to verify)
>         - Other factors (hard to predict)

>> You can also (a) drop outliers

> You can't drop outliers because, since you actually know nothing
> about your data's provenance, you don't know what an "outlier"
> is when you're dealing with a self-selected sample. You might,
> for example, discard the survey response from the one *REAL*
> CIO who answers the survey! You simply do not know.

Bull.  Statistics will give you the outliers.  Sampling a known quantity
can give you the outliers.  Sure, you *could* have 99% of the respondents
lie- but that's not _likely_.
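
A rough sketch of the kind of screen I mean, in Python - the budget figures
are hypothetical, and the median/MAD "modified z-score" with a 3.5 cutoff is
just one common convention, not anything from an actual poll:

    # Flag self-reported figures that sit implausibly far from the rest of the
    # sample.  A median/MAD "modified z-score" is used so a single wild answer
    # can't hide itself by inflating the standard deviation.  Numbers are made up.
    from statistics import median

    def flag_outliers(values, cutoff=3.5):
        med = median(values)
        mad = median(abs(v - med) for v in values)
        if mad == 0:
            return []
        return [v for v in values if 0.6745 * abs(v - med) / mad > cutoff]

    # Claimed annual IT budgets, in $K, from ten hypothetical respondents.
    budgets = [250, 310, 275, 290, 4_000_000, 260, 305, 280, 295, 270]
    print(flag_outliers(budgets))   # -> [4000000]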

You don't think you can back-check the likely IT budgets of companies of a
representative size?  These numbers don't *need* to be down to the tenth
of a cent (in your specific example, the number's for marketing anyway!)

> What you're trying to do is apply science to pseudoscience. The
> result is comparable to polishing a turd: if you work at it hard enough,
> it still won't get shiny.

No, what I'm saying is that polls can be "good enough" to use within
reason when you don't have the exact data.  Lots depends on the poll, the
data's use and what ancillary data you have to check against.

>> , (b) have cross-conflicting questions

> That simply measures consistency in response; not whether it is
> truthful or whether your sample is biassed.

Yep, and if someone's trying hard enough with enough people, they can skew
those results.  But for most people filling out bingo cards, that's more
work than it's worth to them.
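
The mechanics of a cross-check are cheap, too; something along these lines
(hypothetical question pair, hypothetical answers, and an arbitrary 50%
tolerance) is all it takes to flag the bingo-card crowd:

    # Two answers that should agree if the respondent is being straight:
    #   q1_pct:    "Roughly what % of your IT budget goes to security?"
    #   q2_spend:  "Roughly what do you spend on security per year ($K)?"
    #   it_budget: claimed total IT budget ($K)
    # All responses are made up.  Flag anyone whose implied and stated spend
    # disagree by more than 50%.
    responses = [
        {"q1_pct": 10, "q2_spend": 100, "it_budget": 1000},   # consistent
        {"q1_pct": 25, "q2_spend": 50,  "it_budget": 1000},   # conflicting
        {"q1_pct": 5,  "q2_spend": 60,  "it_budget": 1200},   # consistent
    ]

    def inconsistent(r, tolerance=0.5):
        implied = r["it_budget"] * r["q1_pct"] / 100.0
        return abs(implied - r["q2_spend"]) > tolerance * max(implied, r["q2_spend"])

    print([i for i, r in enumerate(responses) if inconsistent(r)])   # -> [1]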

>> (c) answer the questions on behalf of a known quantity and still be able
>> to validate polls pretty well.  You obviously don't get people who don't
>> care to respond, but if the number of people who do respond is
>> significant, that's ok.

> NO IT IS NOT OK!

Sure it is- as long as you're after trend data, not exact numbers.

Let's say the question is "Has IT spending in widget shops gone up?"  If
you answer that on behalf of known quantities in an industry, and it has
for 850 of 1,000 of those companies, and 90% of 50,000 answers to a poll
all say that it has, you can be pretty sure the poll data is right.  If
your answers on behalf of a known quantity don't match the poll results,
then you can call them into question.  If an additional 20,000 widget
manufacturers don't answer the poll, it's ok.
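
The back-check itself is back-of-the-envelope stuff; using the (hypothetical)
numbers from the paragraph above, something like:

    # Compare the trend in the known sample against the poll's self-selected one.
    # The 10-percentage-point tolerance is an arbitrary "close enough" line.
    known_yes, known_n = 850, 1_000       # widget shops we can check directly
    poll_yes,  poll_n  = 45_000, 50_000   # poll answers saying spending went up

    known_rate = known_yes / known_n      # 0.85
    poll_rate  = poll_yes / poll_n        # 0.90

    tolerance = 0.10
    agrees = abs(known_rate - poll_rate) <= tolerance
    print(f"known {known_rate:.0%}, poll {poll_rate:.0%}, trend usable: {agrees}")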

> I am sorry, Paul - if you believe the statement you made above, you
> really really really need to read a few introductory texts on statistics,
> the scientific method, and research methods. Your statements above
> are, to a trained statistician, comparable to a declaration that not
> only is the earth flat, but it rests on the back of a turtle.

Trained statisticians do polls too, believe it or not.

> I wasn't originally aiming my rant at Paul (I seem to be ranting
> at my buddies a lot these days...) but it is exactly the kind of
> tolerance of pseudo-science that Paul is advocating above
> that keeps security a "social science" rather than something
> measurable or quantifiable. Security practitioners are on the
> verge of understanding that we need to sell security in terms
> of ROI and risk, and it's just BEGINNING to sink in that
> risk requires real metrics and statistics. But we're still stuck
> with a lot of pseudo-science.

> I'm sure nobody on this list has ever filled out one of those surveys
> from a magazine in which they asked you your job position, whether
> you were a decision-maker, company size, etc...  And I'm sure you
> all fill them out EXACTLY right. I used to enjoy periodically asserting
> that I was the CEO of a 1 person company with a $4,000,000 IT
> budget (well, a guy can dream, huh?)     Unfortunately, sometimes

You're out of the range of the mean by orders of magnitude; anyone doing
it even half-way should be throwing that response away (assuming they
*want* correct data.)

> ARRGH!! NO! NO MORE PSEUDO-SCIENCE!
> YOU ARE HURTING MY BRAIN!!!!!!  MY HEAD IS
> GOING TO EXPLODE!!!
>
> Paul, if you are a scientist and you measure data, and then
> decide to throw away values that don't match your expectations,
> that's called "experimental fraud"!!  That's um, bad!

We _aren't_ measuring data in polling, we're tabulating other people's
answers.  For that, throwing away the outliers is pretty common practice.
Even when you can't validate your measured data, it's still good practice for
lots of things that do deal with measurement.
(Sorry, "correct" was a bad choice of words; "representative" is better.)

> See, the problem is that you can't a priori decide you know
> what your mean _is_ until you know what your data is. So
> what if 50% of your self-selected sample all were feeling
> frisky that day and entered bogus figures? How _many_
> values around the mean will you throw away until you get
> a number that "feels right"???    That's how psychic researchers

Of course you can't throw away the outliers until you get all the data,
just like when you design a test, you can't throw out the "bad questions"
until you get a group of people to answer them, then compare all the
answers.  Could all your test subjects do the test wrong?  Sure!  Does it
happen?  Not very often, and almost every multiple choice test is designed
that way.  Now, that doesn't say multiple choice tests are the *best* way to do
*anything*- just that it's an accepted method of validating the test's
questions.
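
The test analogy has a standard mechanical form as well; a toy version of
that after-the-fact question check (made-up answer matrix, and the 0.2
cutoff is arbitrary) looks like:

    # Toy item analysis: after a group has taken the test, flag questions whose
    # scores barely track (or run against) the total score - the usual sign of
    # a "bad question".  Requires Python 3.10+ for statistics.correlation.
    from statistics import correlation

    # rows = test takers, columns = questions, 1 = correct, 0 = wrong
    answers = [
        [1, 1, 0, 1],
        [1, 0, 1, 1],
        [0, 0, 1, 0],
        [1, 1, 0, 1],
        [0, 0, 1, 0],
    ]

    totals = [sum(row) for row in answers]
    for q in range(len(answers[0])):
        col = [row[q] for row in answers]
        r = correlation(col, totals)
        print(f"question {q}: item-total correlation {r:+.2f}",
              "-> keep" if r >= 0.2 else "-> review")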

Now, if you get a high rate of bogus results, it could be in your
questions, or the interpretation thereof.  For instance, the number of CIOs
who think they're spending the right amount on security is more likely a
measurement of how well they think they're doing than of how well the
company throws money at the relevant issues.

> get their results: they know what they want to find and throw
> away data until it "feels right"???

Which is way different than throwing out outliers.  I've just spent lots
of months with a statistician going over 4 years' worth of (mostly
measured) data, and in almost every case where one particular metric went
outside some number of standard deviations, it was because the data was
bad (for limited data sets, it was because there was a significant event.)
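
In code terms that screen is about as dull as it sounds; a minimal version
over one made-up monthly metric (the 3-standard-deviation line is the usual
rough convention, nothing more):

    # Flag points that fall outside k standard deviations of the *rest* of the
    # series - candidates for "bad data or a significant event", to be checked
    # by hand rather than silently dropped.  Figures are made up.
    from statistics import mean, stdev

    monthly_incidents = [12, 14, 11, 13, 15, 12, 90, 13, 14, 12, 11, 13]

    def suspicious_points(series, k=3.0):
        flagged = []
        for i, x in enumerate(series):
            rest = series[:i] + series[i + 1:]          # leave the point out
            if abs(x - mean(rest)) > k * stdev(rest):
                flagged.append((i, x))
        return flagged

    print(suspicious_points(monthly_incidents))   # -> [(6, 90)]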

Sure, it'd be *better* and *more accurate* to build data collection
systems from the ground up, measure everything over a four year period,
then tabulate the results.  But you'd have had to have started four years
ago to do that today.  That doesn't mean you can't get good answers
validating what data you do have, then using the stuff that validates.

> There is no amount of compensating controls you can use
> to polish a turd into a useful result. And, more importantly,
> at a certain point, the cost of polish exceeds the cost of
> doing it right in the first place!!

There are things we can't measure; for those, we need what we can get, and
we need to take what we can measure and see if it relates.  Neither of
those is perfect, but we have what we have.

        - "How to Lie with Statistics" - Darrell Huff
                ISBN: 0393310728

Owned it for years, 94.2% of it is good.

Paul
-----------------------------------------------------------------------------
Paul D. Robertson      "My statements in this message are personal opinions
paul () compuwar net       which may have no basis whatsoever in fact."
probertson () trusecure com Director of Risk Assessment TruSecure Corporation
_______________________________________________
firewall-wizards mailing list
firewall-wizards () honor icsalabs com
http://honor.icsalabs.com/mailman/listinfo/firewall-wizards

