Dailydave mailing list archives

RE: Microsoft silently fixes security vulnerabilities


From: Ari Takanen <art () codenomicon com>
Date: Wed, 19 Apr 2006 15:42:25 +0300

Hello all,

Are you sure you want to do a risk assessment for every one of the
thousands of security flaws that, for example, our robustness testing
tools can find? Do you want to add filters and protections for all
the millions of attack simulations that fuzzing tools can generate?
Can you protect against, say, all the attacks that the PROTOS tools
simulate?

I know this is an old topic and has been covered here and on all the
security mailing lists dozens of times, but I think it is important
for readers to understand the different requirements (and sales
pitches) given by different security solution providers. It is
important to understand the difference between reactive and proactive
security. There is also disagreement over the meaning of "window of
vulnerability": some people understand it to start at disclosure,
whereas we "proactive" security people think it starts with the
introduction of the programming flaw.

Now, studying the life-cycle of a vulnerability, we have a different
risk level at each stage:

programming -> release: risk none

Most security problems are created during programming; they are QA
flaws. Most problems should also be fixed here, and this is the main
target for proactive tools and good quality assurance practices, but
as almost everyone knows, very few vendors do this. Mistakes found
here usually also affect earlier releases of the product. Solutions
are robustness testing (i.e. fuzzing) tools like ours, and code
auditing tools.
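
To give a concrete feel for what such a tool does, here is a minimal
mutation-fuzzer sketch in C. Real robustness testers like PROTOS and
ours build their test cases systematically from protocol models
rather than at random, and the file names here ("./target",
"sample.bin") are only placeholders for illustration:

    /* Minimal mutation fuzzer: read a valid sample input, corrupt a
     * few random bytes, and run the target on each mutant.  A crash
     * or non-zero exit flags a robustness flaw worth a closer look. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void)
    {
        unsigned char buf[4096], mutant[4096];
        FILE *f = fopen("sample.bin", "rb");
        if (!f)
            return 1;
        size_t len = fread(buf, 1, sizeof buf, f);
        fclose(f);
        if (len == 0)
            return 1;

        srand(12345);                /* fixed seed: reproducible runs */
        for (int i = 0; i < 1000; i++) {
            memcpy(mutant, buf, len);
            for (int j = 0; j < 8; j++)          /* corrupt 8 bytes */
                mutant[rand() % len] = (unsigned char)rand();

            FILE *out = fopen("mutant.bin", "wb");
            if (!out)
                return 1;
            fwrite(mutant, 1, len, out);
            fclose(out);

            int status = system("./target mutant.bin");
            if (status != 0)
                printf("case %d: target failed, status %d\n",
                       i, status);
        }
        return 0;
    }

Each failing case is exactly the kind of "attack simulation"
mentioned above: malformed input that a robust implementation should
reject gracefully instead of crashing on.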

release -> deployment: risk none

This is a problem area for service providers: deploying an IPTV,
VoIP, or other next-generation network without any visibility into
the security of the devices used. Do I even need to name the tools
used here? They are not always legal... Legal solutions again include
robustness testing tools and some fault injection tools.

deployment -> disclosure: risk minor

Most robustness flaws (and security flaws are robustness flaws too)
reveal themselves as unexplainable crashes, memory leaks, and other
reliability problems. But exploits do not exist yet.

disclosure -> patch availability: risk high

Everyone knows the flaw. Reactive security solution vendors run for
the bucks. Vendors panic and do all they can in the time they have
been given to fix the flaws, usually with bad results. Everyone joins
in the fun of crisis communication.

patch availability -> patch deployment: risk moderate

Good, easy cases are fixed immediately and automatically. More
critical systems might take longer, but after their verification
processes they can finally deploy the corrections and work-arounds.

patch deployment -> product retirement: risk none

There is no need to consider this flaw any more. It is fixed; the
vulnerability is eliminated. Security solution protections and rules
could be removed, if so decided.
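
For readers who prefer code to prose, the whole model fits in a small
lookup table. Here it is as a C sketch, encoding exactly the stages
and risk levels listed above and nothing more:

    /* The vulnerability life-cycle above as a plain lookup table. */
    #include <stdio.h>

    enum risk { RISK_NONE, RISK_MINOR, RISK_MODERATE, RISK_HIGH };

    struct stage {
        const char *from, *to;
        enum risk risk;
    };

    static const struct stage lifecycle[] = {
        { "programming",        "release",            RISK_NONE     },
        { "release",            "deployment",         RISK_NONE     },
        { "deployment",         "disclosure",         RISK_MINOR    },
        { "disclosure",         "patch availability", RISK_HIGH     },
        { "patch availability", "patch deployment",   RISK_MODERATE },
        { "patch deployment",   "product retirement", RISK_NONE     },
    };

    int main(void)
    {
        static const char *name[] =
            { "none", "minor", "moderate", "high" };
        for (size_t i = 0;
             i < sizeof lifecycle / sizeof lifecycle[0]; i++)
            printf("%-18s -> %-18s: risk %s\n", lifecycle[i].from,
                   lifecycle[i].to, name[lifecycle[i].risk]);
        return 0;
    }

Note where the single high-risk row sits: between disclosure and
patch availability. Remove "disclosure" from the table and the peak
disappears, which is exactly the argument that follows.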

Now what happens if we remove the "disclosure" step from the
process? What if we are able to deploy the patch without anyone
noticing? Unfortunately, the more widely deployed the product is, the
more reverse-engineers (including security vendors) each security
patch will attract. There is a partial disclosure attached to every
patch and update. But what if all customers deployed the correction
in time, before the disclosure? The customer would avoid the peak in
the risk.

So I would propose treating every single update and upgrade as a
security correction. Try to use the latest solid version of the
software if it is available without much extra cost. Do not run after
the X.0 versions, though, because those usually contain the most
flaws.

So even if it annoys IDS and perimeter defence vendors, and lazy
administrators who do not want to wake up at midnight to deploy the
latest corrections to their systems, I am completely against public
disclosure. It is very unfortunate that many people require a
proof-of-concept before they understand what a buffer overflow is.
But after the first one they encounter, people usually learn fast.
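
For readers who have never met one, the textbook buffer overflow fits
in a few lines of C. This is the standard classroom example, not a
flaw taken from any particular product:

    /* Textbook stack buffer overflow: strcpy() copies until it hits
     * a NUL byte and never checks the destination size, so any
     * argument longer than 15 characters writes past buf into
     * adjacent stack memory (saved registers, the return address). */
    #include <string.h>

    static void greet(const char *name)
    {
        char buf[16];
        strcpy(buf, name);      /* no bounds check: the flaw */
    }

    int main(int argc, char **argv)
    {
        if (argc > 1)
            greet(argv[1]);     /* a long argv[1] crashes here */
        return 0;
    }

A fuzzer finds this as yet another "unexplainable crash"; an attacker
who controls the overwritten return address controls the program.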

So, Steve, I agree that most vendors would prefer to fix security
problems quietly, like any other quality problem, and in my opinion
this is a perfect method of handling vulnerabilities.

For more information on disclosure, see also our earlier work at:
http://www.ee.oulu.fi/research/ouspg/sage/disclosure-tracking/

Just my European cent on the topic,

/Ari

PS: We at Codenomicon (www.codenomicon.com) usually urge our customers
to fix the problems quietly, without public disclosure. And there can
be hundreds of security problems our tools find in each and every
product. Any communication interface is a security risk. And we can
test almost any critical interface you would need:
http://www.codenomicon.com/products/all.shtml


On Mon, Apr 17, 2006 at 12:00:01PM -0500, dailydave-request () lists immunitysec com wrote:
From: "Steve Manzuik" <smanzuik () eeye com>

My biggest problem with the whole silently-fixed-patches practice is
that it makes it tougher for large end users to do a proper risk
assessment of the patch. Most of the large enterprises I have been
exposed to all but ignore the vendor risk rating and try to assign a
patch their own internal risk rating. Without knowing what is truly
fixed, it is pretty tough to do this.

The next problem with this, which Andre and I demonstrated in our
talk, was that certain signature-based protections do not protect
against the silently fixed vulnerabilities. So organizations that
take their time to patch because they feel their security product is
protecting their systems might be surprised.

[snip]

Sadly, this is not just an MS problem. I will go out on a limb here
(and probably get slapped for it) and say that *most* vendors
practice this.

