Firewall Wizards mailing list archives
Re: Re: Flawed Surveys [was: VPN endpoints]
From: "Bruce B. Platt" <bruce () ei3 com>
Date: Wed, 01 Sep 2004 15:48:27 -0400
Please excuse me for not responding in-line; my comments don't fit that format well, as I am going to tell a story.
1. I have to agree with Marcus.

2. Long ago, when I was a kid, I was lucky enough to get my Ph.D. in what was then called Experimental Psychology from Cal. We got the scientific method absolutely beaten into us until understanding it was "first" nature. The argument going on here revolves completely around what is good science and what is not.
Two good references: the first is an overview of the "scientific method", http://phyun5.ucr.edu/~wudka/Physics7/Notes_www/node5.html; the second is the classic text, Thomas Kuhn's "The Structure of Scientific Revolutions", ISBN 0226458083.

Simply put, to do good science:

1. Develop an hypothesis.
2. Develop a plan to test it, making sure you clearly state the "null hypothesis" and operationally define what you plan to measure.
3. Determine a sampling plan.
4. Keep all the data. The only good reason for throwing out a piece is one that has nothing to do with the data value itself, but rather with an objectively detached evaluation of whether that value resulted from the test you thought you applied to the sample of the population you chose.
5. Apply the statistical analysis techniques you decided on in step 2 above.
6. Resist the temptation to use other statistical techniques unless you are ready to start over at step 1 above.
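If a concrete sketch of those steps helps, here is a minimal one in Python. The scenario (comparing false-positive rates between two hypothetical firewall products), the numbers, and the choice of Welch's t-test are all invented for illustration; the point is only the order of operations -- hypothesis, operational definition, sampling plan, and test are all fixed before the data exist.

-------- cut here --------
# A sketch of steps 1-6 above.  The scenario and every number in it are
# invented for illustration; only the order of operations matters.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Steps 1-2: hypothesis, null hypothesis, operational definition.
#   H1: mean daily false-positive count differs between firewall A and B.
#   H0: the two means are equal.
#   "False positive" is operationally defined up front (say, an alert
#   closed as benign within 24 hours), and so is the test: Welch's
#   two-sample t-test at a significance level fixed BEFORE any data exist.
ALPHA = 0.05

# Step 3: sampling plan -- e.g., 30 sites per product, chosen in advance.
# (Stand-in measurements; a real study would collect these per the plan.)
sample_a = rng.normal(loc=12.0, scale=3.0, size=30)
sample_b = rng.normal(loc=13.5, scale=3.0, size=30)

# Step 4: keep ALL the data.  This step is the *absence* of code --
# no trimming, no filtering between collection and the pre-chosen test.

# Step 5: apply exactly the technique chosen in step 2.
t_stat, p_value = stats.ttest_ind(sample_a, sample_b, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
print("Reject H0." if p_value < ALPHA else "Fail to reject H0.")

# Step 6: if this result disappoints, you do not shop for another test
# on the same data; you go back to step 1 with a new hypothesis.
-------- cut here --------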
Whatever you do is only as good as your starting hypothesis, the operational definitions which you create, and your experimental techniques.
A personal digression to illustrate the above: My dissertation was on the subject of visual influence on auditory localization judgements. I asked people to judge where a sound came from based on various manipulations of what they saw. I ran this research during the early '70s, when some students were in a permanent state of "buzz". My committee and I agreed in advance that the data from any subject who indicated that they were in a state of "buzz" would be excluded. Our justification was that they were not part of the normal human population, in that they were influenced by a weed which had known and well understood perceptual effects. It was acceptable research practice to recognize that they were part of the "buzzed" human population, which was quite large at the time, but data derived from them didn't apply to the research, because the sample then wouldn't match the original sampling plan, which was meant to apply to the human population at large.
It would have been equally valid to develop an hypothesis about how buzzed perceptions differed from non-buzzed perceptions, but that's not what I set out to study -- idiot that I was at the time. :-)
As we used to say in the academic world in those days: "It is left as an exercise for the reader to find the relevance of the above anecdote to the issue being discussed."
Regards, and thanks for allowing me an opportunity to use something from my first career to reflect on this current one.
Bruce

PS. You can certainly choose not to post this to the list if you wish. Just writing it was fun enough for me.
Marcus J. Ranum wrote:
Paul D. Robertson wrote:
>> or the CIO magazine survey on security) - a lot of these surveys are
>> fundamentally flawed. They yield results but it's hard to say what the
>> results actually _measured_.
>
> So long as they're flawed approximately the same way from survey to
> survey, they're often both "better than nothing[1]" and a good relative
> metric.

Sorry, but you're completely wrong about that. The reason is that if you have a survey of unknown bias, you can't assume that the bias does not change because of other factors, because the bias is unknown. In other words, unless you know how wrong it is and why, you can't be sure it's wrong the same way twice.

> We often don't need absolute metrics, relative metrics will do just fine.

Be careful; polls are opinion measures, not metrics. Metrics would be if you were (for example) pulling actual data from corporate financials regarding security expenditures. Measuring the opinion of someone who claims to be a CIO about what their expenditures either {are|should be} is not even good enough to give a relative metric. What I think you're saying, unfortunately, is "having some 'gee wow' numbers is good enough to blow some basic FUD and we need basic FUD so it's OK."

> I know what my $foo risk was last year, and I know what it was the year
> before, and I can compare to the survey and see the relative differences
> and the relative change - therefore, I can figure out my approximate
> relative change for this year.

But that's the problem. You don't actually "know" anything. You have some information that is based on a self-selected sample which I guarantee you will change next year. Different people will be bored enough to answer the survey, and the answers they give will be either more or less misinformed than they were last year. There are no constants *whatsoever* in these surveys. Now, if you said you were going to take the same self-selected sample and poll those same people next year, you're starting to apply some controls to your survey, but they're still not going to be good enough to give you a result worth having.

>> - How much the person cared about the topic (motive to respond)
>> - How honest the respondent is (hard to verify)
>> - Other factors (hard to predict)
>
> You can also (a) drop outliers

You can't drop outliers because, since you actually know nothing about your data's provenance, you don't know what an "outlier" is when you're dealing with a self-selected sample. You might, for example, discard the survey response from the one *REAL* CIO who answers the survey! You simply do not know. What you're trying to do is apply science to pseudoscience. The result is comparable to polishing a turd: if you work at it hard enough, it still won't get shiny.

> , (b) have cross-conflicting questions

That simply measures consistency in response; not whether it is truthful or whether your sample is biased.

> (c) answer the questions on behalf of a known quantity and still be able
> to validate polls pretty well. You obviously don't get people who don't
> care to respond, but if the number of people who do respond is
> significant, that's ok.

NO IT IS NOT OK!
________________

I am sorry, Paul - if you believe the statement you made above, you really really really need to read a few introductory texts on statistics, the scientific method, and research methods. Your statements above amount, to a trained statistician, to a declaration that not only is the earth flat, but that it rests on the back of a turtle. I wasn't originally aiming my rant at Paul (I seem to be ranting at my buddies a lot these days...) but it is exactly the kind of tolerance of pseudo-science that Paul is advocating above that keeps security a "social science" rather than something measurable or quantifiable. Security practitioners are on the verge of understanding that we need to sell security in terms of ROI and risk, and it's just BEGINNING to sink in that risk requires real metrics and statistics. But we're still stuck with a lot of pseudo-science.

>> I'm sure nobody on this list has ever filled out one of those surveys
>> from a magazine in which they asked you your job position, whether you
>> were a decision-maker, company size, etc... And I'm sure you all fill
>> them out EXACTLY right. I used to enjoy periodically asserting that I
>> was the CEO of a 1 person company with a $4,000,000 IT budget (well, a
>> guy can dream, huh?) Unfortunately, sometimes
>
> You're out of the range of the mean by orders of magnitude, anyone doing
> it even half-way should be throwing that response away (assuming they
> *want* correct data,)

ARRGH!! NO! NO MORE PSEUDO-SCIENCE! YOU ARE HURTING MY BRAIN!!!!!! MY HEAD IS GOING TO EXPLODE!!!

Paul, if you are a scientist and you measure data, and then decide to throw away values that don't match your expectations, that's called "experimental fraud"!! That's, um, bad! See, the problem is that you can't a priori decide you know what your mean _is_ until you know what your data is. So what if 50% of your self-selected sample all were feeling frisky that day and entered bogus figures? How _many_ values around the mean will you throw away until you get a number that "feels right"??? That's how psychic researchers get their results: they know what they want to find and throw away data until it "feels right". There is no amount of compensating controls you can use to polish a turd into a useful result. And, more importantly, at a certain point, the cost of polish exceeds the cost of doing it right in the first place!!

Reading list:
- "How to Lie with Statistics", Darrell Huff, ISBN 0393310728
- "Research Design and Methods" (4th ed), Bordens and Abbott, ISBN 0767421523
- Richard Feynman's article on experimental controls and their mis-application in social "sciences", from "The Pleasure of Finding Things Out" (I think it's that book..)

mjr.
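[To make the outlier argument above concrete, here is a toy simulation; every number in it is invented. It models a self-selected survey whose respondents carry a bias of unknown size and sign that changes from year to year, plus a few "$4,000,000 IT budget" answers. Trimming the wild values yields a figure that looks respectable but still carries each year's unknown bias, so even the relative year-over-year change is untrustworthy.]

-------- cut here --------
# A toy simulation -- every number invented -- of a self-selected survey.
# Respondents carry a bias of unknown size and sign that changes from
# year to year, plus a few "CEO of a 1 person company with a $4,000,000
# IT budget" answers.  Trimming the wild values yields a number that
# looks respectable but still carries each year's unknown bias.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

TRUE_MEAN = {2003: 100.0, 2004: 105.0}   # ground truth: a real 5% increase

def survey(year, n=200):
    """One year's self-selected survey: honest-ish answers shifted by an
    unknown per-year bias, plus 10% frisky respondents entering nonsense."""
    bias = rng.normal(0, 15)                          # unknown, unknowable
    honest = rng.normal(TRUE_MEAN[year], 10, size=n)
    frisky = rng.uniform(0, 4_000_000, size=n // 10)  # dream-CEO answers
    return np.concatenate([honest + bias, frisky])

for year in (2003, 2004):
    data = survey(year)
    trimmed = stats.trim_mean(data, 0.1)   # drop top/bottom 10% as "outliers"
    print(f"{year}: true mean = {TRUE_MEAN[year]:6.1f}   "
          f"raw mean = {data.mean():10.1f}   trimmed mean = {trimmed:6.1f}")

# The trimmed means look sane, but the implied year-over-year change can
# land far from the true 5% -- and nothing inside the data says how far,
# or whether it is wrong the same way twice.
-------- cut here --------

[Run it with a few different seeds: the trimmed estimate swings by more than the real change it is supposed to detect, which is exactly the "unknown bias" problem described above.]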
_______________________________________________ firewall-wizards mailing list firewall-wizards () honor icsalabs com http://honor.icsalabs.com/mailman/listinfo/firewall-wizards