Bugtraq mailing list archives

Re: Funny article


From: Javier Fernandez-Sanguino <jfernandez () germinus com>
Date: Tue, 18 Nov 2003 18:50:28 +0100

Steven M. Christey wrote:

> It would be very interesting to see any results that try to compare
> the timeliness of vendor response.  I attempted to conduct such a
> study

I would be interested in that, too.

> a year and a half ago, but the study failed due to lack of time and a
> lot of other factors such as:
(...)

Well, it can be difficult to compare these times without common
criteria. From personal experience, I think it is quite possible to
relate some data (the date the advisory was released and the date the
bug was published) and extract some conclusions. You might have issues
with some specific bugs but, I think, the effects you described are
diminished if you take a bird's-eye view (and cover several years).

> - how one "counts" the number of vulnerabilities.  I view this as one
> of the main roles of CVE, however any set of vulnerability data must
> be normalized in *some* fashion before being compared, otherwise the
> stats will be biased.

When I have done this kind of analysis and comparison I've always used
CVE both as the vulnerability count and as the way to relate data from
different sources (including vulnerability databases such as ICAT and
vendor advisories, for example).
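
To make the idea concrete, here is a minimal sketch (Python, with
made-up CVE names and dates standing in for what would really be pulled
from ICAT or from vendor advisories) of what I mean by using CVE names
as the key to relate disclosure dates to fix dates:

# Minimal sketch: join disclosure data with vendor fix data using CVE
# names as the common key and compute a rough "time to fix" in days.
# The dictionaries below are hypothetical placeholders; a real study
# would load them from ICAT exports, vendor advisory pages, etc.
from datetime import date

# CVE name -> date the issue was publicly disclosed (made-up data)
disclosed = {
    "CVE-2003-0001": date(2003, 1, 15),
    "CVE-2003-0002": date(2003, 2, 3),
}

# CVE name -> date the vendor advisory/fix was released (made-up data)
fixed = {
    "CVE-2003-0001": date(2003, 1, 30),
    "CVE-2003-0002": date(2003, 3, 10),
}

# Only CVE names present in both sources can be compared.
for cve in sorted(set(disclosed) & set(fixed)):
    days = (fixed[cve] - disclosed[cve]).days
    print(f"{cve}: {days} days to fix")

Nothing fancy, but once the data from both sources is keyed by CVE name
the "time to fix" figures fall out almost for free.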

(...)

> I initially tried to cover a 3 month time span, but it really seemed
> like at least a year's worth was required.

IMHO the more data you use, the less the effects you described affect
the statistical values you will get.


> You can't simply compare published advisories against each other,
> because:

After all, if they refer to the same CVE names, why not?

> - different vendors have varying criteria for how severe a bug must
> be before an advisory is published

That's not an issue; that is, after all, a vendor decision. However,
bug severities could be determined by independent parties or through a
common formula (see the EISPP Common Advisory Format Description I
pointed to in a previous mail). The raw "time to fix" data could be
partitioned by severity and, even so, it would still be a useful metric
for comparing vendors.
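
As a rough illustration of partitioning by severity (again, the
vendors, severity labels and day counts below are invented), the
comparison would simply be done per severity bucket, for instance:

# Sketch: group made-up "time to fix" figures per (vendor, severity)
# bucket and report the median of each bucket, so that vendors are
# only compared within the same severity class.
from collections import defaultdict
from statistics import median

# (vendor, severity, days-to-fix) tuples -- hypothetical raw data
records = [
    ("vendor-a", "high", 12), ("vendor-a", "high", 30),
    ("vendor-a", "low", 90), ("vendor-b", "high", 25),
    ("vendor-b", "low", 40), ("vendor-b", "low", 55),
]

by_bucket = defaultdict(list)
for vendor, severity, days in records:
    by_bucket[(vendor, severity)].append(days)

for (vendor, severity), days_list in sorted(by_bucket.items()):
    print(f"{vendor} / {severity}: median {median(days_list)} days to fix")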

> - some advisories report multiple bugs, which could mean multiple
> disclosure and notification dates, and different times-to-fix

If advisories refer to different CVE names I don't see that as an
issue either; it just means that different vulnerabilities were fixed
at the same time.

> - sometimes an interim patch is provided before the advisory

That is also a non-issue. Many security bugs can be fixed by
appropriate configuration (or deactivation) of parts of the operating
system (i.e. the procedures described in security manuals). Some
interim patches just say 'remove the executable', which is not really a
fix. If your target is to determine when a bug gets fixed, you should
only take into account when it really does get fixed.

> - sometimes security issues are patched through some mechanism other
> than an advisory (e.g. Microsoft's service packs, which fix security
> bugs but don't normally have an associated security bulletin)

Which is not the best way to distribute security patches, since
researchers (or IT managers) cannot really know which security bugs are
fixed by which update.


> - sometimes there are multiple advisories for the same bugs (SCO and
> Red Hat immediately come to mind)

Many vendors do this. In many cases they release a new version of a
patch to fix issues with the patch itself, and still refer to the
previous vulnerability. I agree this is a difficult problem, since the
question arises: when was the bug really fixed? (with the first
advisory, or with the advisories published after it?)

> You also can't directly compare by "total bugs per OS" because of the
> variance in packages that may or may not get installed, plus how one
> defines what is or isn't part of the "operating system" as mentioned
> previously.  One way to normalize such a comparison is to compare
> "default" installations to each other, and "most secure"
> installations to each other - although of course the latter is not
> always available.

That's why you should compare "similar" operating systems: say,
different Linux distributions, or different proprietary UNIX systems,
which provide a similar variety of packages and, in many cases, similar
default installations. Comparing, for example, Microsoft and any
UNIX/Linux operating system is comparing apples to oranges; it might be
a good PR move but it seldom provides any useful information. For a
more thorough discussion, I find point 3 in section 4 (Security) of
David Wheeler's "Why Open Source Software / Free Software (OSS/FS)?
Look at the Numbers!" [1] very enlightening.

> Fortunately, the percentage of vulnerability reports with disclosure
> timelines seems to have been increased significantly in the past
> year, so maybe there is a critical mass of data available.

I think there is sufficient data to do a proper analysis of
vulnerabilities from many vendors, since many already provide accurate
CVE mappings. Microsoft, unfortunately, is not one of them.


> As a final note, I have the impression that most vendors (open or
> closed, commercial or freeware) don't track their own speed-to-fix,
> and *no* vendor that I know of actually *publishes* their
> speed-to-fix.


That is actually not really true. Debian already does this; see the
FAQ item "How much time will it take Debian to fix vulnerability
XXXX?" in the Security Manual [2].

I've actually tracked this information for Debian this year and
published the results as part of the Debconf3 talk "Security in
Debian: Food for thought and Discussion" [3].

> Hopefully someday there will be a solid public paper that actually
> tries to quantify the *real* amount of time it takes to fix bugs,
> whether on a per-vendor or per-platform basis, and accounts for all
> the issues that I described above.  (I know of a couple private
> efforts, but they are not public yet.)  Of course, one would want to
> verify the raw data as well.

I also think that a scientific paper on this issue is long overdue. It
might help clarify some misconceptions, which would be helpful
considering the way this leads to FUD being published by vendors. The
only (outdated) article I've seen which tries to (slightly) address
this issue while providing, at the same time, the raw data it used is
"Linux vs. Microsoft: Who Solves Security Problems Faster?" by Jim
Reavis for SecurityPortal [4]. Of course, it takes the Linux vs.
Microsoft stance, which is not the best way to do it, but it is still
much better than many other FUD articles I've seen out there.

Best Regards

Javi


[1] http://www.dwheeler.com/oss_fs_why.html#security
[2] http://www.debian.org/doc/manuals/securing-debian-howto/ch11.en.html#s-debian-sec-team-faq
[3] Raw data and slides available at http://people.debian.org/~jfs/debconf/security/, see specifically slides #8 and #9.
[4] http://web.archive.org/web/20010608142954/http://securityportal.com/cover/coverstory20000117.html

