funsec mailing list archives

Re: oracle not only offender - researchers NOT responsible?


From: Matthew Murphy <mattmurphy () kc rr com>
Date: Tue, 13 Dec 2005 01:32:12 -0600

Dear List,

I've only recently been made aware of this thread (and of funsec as a
list), courtesy of a heads-up from Gadi pointing me to the thread
concerning my recent SecuriTeam post.

As you may have noticed, I'm considerably more opinionated on
controversial issues than is typical of the SecuriTeam bloggers.  The
issue made for quite a wicked first post, and it's based more on
specific experience than most of the list readership may believe.

With that in mind, let's move to the specific points in Florian's post,
which is the only reply that directly rebuts the article.

> * Gadi Evron:
>
>> He shows specific cases and vulnerabilities, and is worth a read.
>> Quite refreshing and very informative.
>>
>> http://blogs.securiteam.com/index.php/archives/133
>
> Informative?  The part about "CERT" (I think CERT/CC is meant) is
> misinformed at best because CERT/CC shares vulnerability information
> with non-vendors without compensating the submitter:

[snip]

Yes, CERT/CC does, among other things, release vulnerability information
to third parties.  I personally do not advocate this particular practice
of the center, and perhaps I should have made it clearer in my post
that my advocacy was not a holistic one.  I thought that my critique of
the relative toothlessness of CERT's 45-day disclosure policy was
sufficient for that purpose.

> The Lynn incident shouldn't be generalized because if vendors and
> security consultants (I hesitate to call them "researchers") are so
> close, the results hardly ever end up on BUGTRAQ (or in some kind of
> public advisory).  Shawn V. Hernan wrote at the beginning of 2002,
> regarding responsible disclosure:
>
> | The whole idea of disclosure at all is not universally accepted.
>
> <http://jis.mit.edu/pipermail/saag/2002q1/000464.html>
>
> And I doubt that much has changed.  Where are the advisories for SS7
> switches?  Juniper routers?  SAP software?

Vendors, researchers... close?  You're joking... right?  Each tends to
treat dealing with the other as a purely necessary evil.  Unfortunately,
today's world doesn't leave much room for that approach to change in any
constructive fashion.

> Let's face it: Most exploitation occurs after the issue has been made
> public.  Whether the disclosure was coordinated in some way, or the
> discoverer just went ahead and posted to BUGTRAQ, doesn't seem to make
> a fundamental difference.  My own conclusion (which may well change
> again 8-): Do not actively look for security bugs in _software_
> (deployments are a different matter; in practice, no one cares about
> software vulnerabilities anyway), unless you aim for zero defects and
> it's practical to achieve this goal.  Help the vendor to quietly fix
> security bugs you have discovered accidentally (if they aren't
> completely clueless).

I have to disagree with the basis of your sentiment here.  No,
developers need not hear about every buffer overrun vulnerability to
avoid repeating the mistake, but that's a very small part of the overall
picture.

So much of security depends not only on having a picture of the threats
our networks face, but on having a *COMPLETE* picture.  Accurate study
of the
impact and prevalence of vulnerabilities is important to
defense-in-depth strategies.  It wasn't the fact that buffer overruns
existed in Windows XP that prompted Microsoft to invest so much in
designing (fairly effective) XPMs for Service Pack 2.  Rather, it was
the sheer number of these vulnerabilities, and ultimately the resulting
worm outbreaks, that prompted these changes.

While it is true that most exploitation *today* occurs post-patch,
there's nothing to say that it would remain true in a culture more
biased toward non-disclosure.  Look at the emerging motive for the
authorship and distribution of Windows malware: money.  Right now, it's
profitable for people to design malware that only targets systems that
are affected by publicly-reported vulnerabilities.  If the public
reports dry up, that source of profit goes away.  But do the malware
authors?  Don't bet your lunch on it.

What I think will happen, if the public reports that today drive the
majority of real-world exploitation were to dry up, is that we'd see
exploitation in advance of public reporting.  We've already seen it on a
few occasions, with the WebDAV exploits for IIS 5.0 being an example.
You're going to start seeing a lot more of it if malicious individuals
are denied a wealth of pre-reported, low-hanging fruit.

Granted, some of those who write and spread malware are kiddies who
would go away if the supply of publicly reported vulnerabilities were to
disappear.  However, some are certainly not.  It would be nothing short
of deluded to believe that exploitation will stop if public reports
stop.  It's even questionable whether exploitation would abate.  But one
thing is for sure: you and I will be *LESS* prepared for it when it does
happen.

Vendors won't be forced to toughen their products, and you and I will be
left with less information and the new responsibility of doing their job
for them.

> Do not create a self-fulfilling prophecy by publicly labeling a
> vulnerability as particularly dangerous.  Downplay it as much as
> possible when talking to the press.

Effective researchers will be able to discern the severity of a
vulnerability for themselves, given any level of technical detail (and
some of them, with nothing more than the ability to reverse-engineer a
patch).  Downplaying it means only that it will take malicious users a
few more hours to determine which vulnerability their malware should
target.  Not worth the loss to me.

I could probably spend the same number of hours reversing patches, but
time lost as a defender is vastly more costly than time lost by an
attacker: an attacker needs to attack only one hole, while I need to
defend against any number of them on a given day.
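
To put rough numbers on that asymmetry (the figures below are purely
illustrative assumptions, not measurements), a minimal sketch in Python:

    # Illustrative figures only: assumed costs, not measured data.
    hours_to_reverse_one_patch = 4   # assumed cost to reverse and assess a patch
    patched_holes_to_cover = 20      # holes a defender must evaluate in a month

    # The attacker pays the reversing cost once, for the one hole he exploits.
    attacker_hours = hours_to_reverse_one_patch * 1

    # The defender pays it for every patched hole across the systems he runs.
    defender_hours = hours_to_reverse_one_patch * patched_holes_to_cover

    print(attacker_hours, defender_hours)  # 4 vs. 80 hours

The exact numbers don't matter; the point is that the defender's cost
scales with the number of holes, while the attacker's does not.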

Aside from that... how many admins do you know who can reverse a patch
and determine for themselves the relative severity of a vulnerability?

I respect the desire to put real solutions on the table, which is why
I'm an advocate of sensible disclosure.  I advocate neither pure full
disclosure nor non-disclosure because neither one is the most effective
option.  That said, pulling the wool over the collective eyes of the
community is not responsible at all... in fact, it's not even disclosure.

> The vendors aren't particularly supportive, though.  If I report
> something, I'm expected to do the hard work: keep the process going
> (pinging the vendor from time to time if there's no progress), write a
> clean exploit, deliver a list of affected products, etc.  Sometimes,
> this is even more work than the actual fix, especially if regression
> testing is automated and you can test the fix together with another
> change which is already scheduled.

To say the vendors aren't "particularly supportive" is a massive
understatement.  I've been working on a resolution for a fairly serious
remotely-exploitable flaw in one vendor's product since August 3rd, and
there is still not even so much as a timetable for a vendor resolution.
Most vendors aren't supportive at all.  Microsoft -- supposedly the
best in the business -- is no exception.

> After some time, it might be a good idea to publicly document
> vulnerabilities fixed in the past, so that developers can avoid making
> the same mistake ("misuse cases").  But this is not necessary for each
> and every buffer overflow vulnerability.

I don't find that to be true, for the reasons I documented above,
particularly the value of accurate statistical research in determining
the need for defense-in-depth measures.  It's as much a matter of
prioritization as anything.

> In an abstract sense, I'm much in favor of full disclosure (mainly
> because it keeps honest vendors honest), but at some point, you need
> to concede to reality.

Reality is that vendors don't act unless their hand is forced.
CERT/CC's prior (non-)disclosure policy is an example.

For the record... I don't endorse US-CERT now, and I certainly didn't
then.  I just think that the relative strictness of the timeframe
standard they set deserves some mention.  It's obvious that vendors
choose to put off dealing with vulnerabilities more often than not, and
they also choose to make their processes overly burdensome for most
researchers.  If vendors' response processes were forced to be leaner...
it could certainly be done.

--
"Social Darwinism: Try to make something idiot-proof,
nature will provide you with a better idiot."

                                -- Michael Holstein
