Dailydave mailing list archives

Re: Exploits matter.


From: security curmudgeon <jericho () attrition org>
Date: Thu, 8 Oct 2009 08:05:53 +0000 (UTC)


Hi Wouter,

: > Ten thousand or not, I cannot download the exploit from Immunity's web site,
: > milw0rm or anywhere else, correct? To me, and to OSVDB who tracks that
: > metric, that is flagged as 'rumored/private'.
: > 
: > Can our industry really put a numeric line on public vs private in the
: > scenario you describe? Do 9,999 CANVAS customers = private, but 10,000
: > CANVAS customers = public?

: In the more formalized evaluation worlds (Common Criteria, EMVco, etc), the
: concept "public/private" is really just one input for the calculation of the
: cost to the attacker. In those terms, the costs in dollars over time would be
: something like this:
: * Prior to the discovery that that part of the SMBv2 implementation had a
: potential place to attack: fuzzing+analysis, perhaps 6 person-months (this
: is really a wild guess, anyone have a good number on this?), so say ~$60.000.
: * From discovery to CANVAS-integrated attack: 3 person-months, so say ~$30.000
: * Now in CANVAS (and you'll get way more goodies with it): ~$4.000
: * When it will be on milw0rm etc: time to make it work properly, ~$200?
: 
: So the "public/private" discussion in my view is a discussion between a $4.000
: and a $200 attack, both of which I cannot understand that youwould call be an
: investment that is too steep for a real attacker.
: 
: Although I can understand from OSVDB's point of view that they cannot confirm
: the status, it is disturbing that apparently people are using the uncertainty
: of the measurement (OSVDB's in this case) to doubt whether it is in the
: $30.000 or $200 range.

Uh, either I do not fully understand your point, or I do not fully 
understand where you misunderstood how the economy works, at least in the 
U.S.

The difference between US $30 and US $200 is big. The difference between 
US $30,000 and $200 is even bigger. I mention that because I believe the 
. in your 30.000 value really means 30,000, which is a huge difference to 
us Americans. Regardless of your use of . vs. , anyone versed in math or 
economics could lecture on the percentages: $200 is more than six times 
$30, but well under 1% of $30,000.

As it applies to OSVDB, if I am doubting our ability to intellectually 
track "public vs private", then the debate over the dollar value should 
not be very relevant, especially in a historical context.

: I'd expect that the vulnerability database crowd would have a "X claims 
: to have the exploit working, here is X's history of those claims so X 
: seems to be truthful in his claims"-construction to cover this. If so, 
: if it is easy to provide such info to the vulnerability database it 

Oh sweet jesus, no you didn't!

What you propose is based on what the VDB world calls "researcher 
confidence". Meaning, the VDBs actually track a disclosure history for a 
given individual: the vulnerabilities they publish, the severity of those 
vulns, the products they were found in, the number of third-party 
disputes, the number of vendor disputes, and several other metrics, to 
calculate a 'researcher confidence' score.

OSVDB actually began to track classification entries that help calculate 
such scores with "third-party verified", "third-party disputed", "vendor 
verified" and "vendor disputed". There was a slight method to our madness.

This has been in demand for more than five years, not only by OSVDB, but 
by specific people at CVE and other VDBs as well. For mostly political 
reasons, OSVDB has been slowly working toward the ability to track this 
metric while the others watched with anticipation. In the last few years, 
we have also begun to discuss and implement a framework for tracking 
*vendor confidence*. It isn't only researchers that are flaky and 
ill-equipped to handle vulnerability disclosure. =)

While beating my chest in primitive style, and hopped up on too many 
glasses of scotch, I'll go ahead and say that only OSVDB has really begun 
to address and implement such scoring mechanisms. Disclaimer: really smart 
guys from other VDBs have given input in our discussions, but cannot 
implement such a system on their own. Disclaimer #2: Even after all the 
discussion, we're left with the classic problem of designing a system that 
strikes a balance between 'over-simplified' (and inaccurate) and 'overly 
complex' (and convoluted).

Long story short, we're working on a better way to calculate both 
researcher and vendor confidence scores. Even a small hint of that has 
caused some notable panic in the *vendor* world, not the researcher world.

: would seem good marketing for vendors of tools like CANVAS to fill it 
: with "we can already do this". That way, someone researching the 
: potential vulnerabilities in his shiny new Windows server will find the 
: remark that SMBv2 could be a serious problem, and see the hint that 
: CANVAS could be used to check. Or is this too simple in the market 
: reality?

I'll go ahead and throw the gauntlet down to Dave and Immunity. OSVDB has 
discussed ways to better implement some of these metrics, specifically an 
overhaul to our classification system regarding exploit availability, as 
well as ways for "vulnerability framework" vendors, free and commercial, 
to share with us a better understanding of what they have in their arsenal 
without giving away details that would be a detriment to their commercial 
advantage.

(that last sentence makes me sick on one level, fascinates me on another)

Would *any* vendor out there, developing exploits, give OSVDB a mapping of 
CVE or OSVDB IDs to their commercial capability to exploit the 
vulnerability? More specifically, not just a yes/no, but the date they had 
developed such an exploit? If the vendor was considered reliable (yes, 
we've been tracking that to some degree in sekrit), we'd make such 
information a part of our database.

OSVDB would in turn break from tradition and offer a link back to that 
vendor, under the 'Tools & Filters' section of our display. While it may 
seem rather unassuming, it would be the first time we did not link to a 
public/free resource, and it would fully demonstrate the capability and 
full arsenal your company had to offer. Hell, even giving us a 75% match 
(say, delayed by 30 days?) would be a fascinating metric to track.
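
As a strawman, such a feed could be as simple as the Python sketch below. 
The IDs, dates, field names, and the whole format are invented here for 
illustration; this is not OSVDB's or any vendor's actual interface:

    from datetime import date, timedelta

    # Hypothetical exploit-availability feed: vuln ID mapped to the date
    # the vendor had a working exploit. All values are made up.
    exploit_feed = [
        {"vuln_id": "CVE-2009-0001", "exploit_date": "2009-09-30"},
        {"vuln_id": "OSVDB-12345", "exploit_date": "2009-10-02"},
    ]

    def delayed_view(feed, days=30, today=None):
        # Model a vendor who only reveals exploits that are at least
        # `days` old -- one reading of a feed "delayed by 30 days".
        today = today or date.today()
        cutoff = today - timedelta(days=days)
        return [rec for rec in feed
                if date.fromisoformat(rec["exploit_date"]) <= cutoff]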

So far, the only companies that have shown us a *hint* of this information 
are iDefense, Tipping Point and some random guy named Evgeny Legerov.

Bottom line, we'd love to track more information, develop more meaningful 
metrics and produce more relevant analysis. However, we're limited by the 
reliable dataset we have available.

: BTW, more philosophical: it does show the enormous cost decrease to the 
: attacker over time (~2-3 months calendar time?), i.e. custom-developed 
: 0days are orders of magnitude more expensive, and the hidden cost of 
: "weaponizing" them is what tools like CANVAS solve for quite a cheap 
: price.

It does, you are right. But one thing keeps sticking in my mind. While 
companies like Immunity spend 30 days with X researchers to write a 
working exploit, companies like Tenable, SAINT and Qualys are taking 2 - 5 
days to reverse the issue and figure out a working vulnerability check. 
Not an exploit, but a way of determining exploitability, both locally and 
remotely. Those vendors have to be putting extra pressure on shops like 
Immunity and others: the gap between 2-5 days and 30 days severely limits 
the window, as many high-value targets patch their systems before a 
working, weaponized exploit is even developed. Yet another variable in 
determining the value or timeframes of everything discussed.




_______________________________________________
Dailydave mailing list
Dailydave () lists immunitysec com
http://lists.immunitysec.com/mailman/listinfo/dailydave

