Information Security News mailing list archives

Re: Researchers predict worm that eats the Internet in 15 minutes


From: InfoSec News <isn () c4i org>
Date: Thu, 24 Oct 2002 01:39:40 -0500 (CDT)

Forwarded from: Russell Coker <russell () coker com au>

On Wed, 23 Oct 2002 07:58, InfoSec News wrote:
> Forwarded from: Robert G. Ferrell <rferrell () texas net>

> > The three authors of the research, published two months ago, present
> > a future where worm-based attacks use "hit lists" to target
> > vulnerable Internet hosts and equipment, such as routers, rather
> > than scanning aimlessly as the last mammoth worm outbreaks, Nimda
> > and Code Red, did last year.

> The operative term here is "vulnerable."  Properly secured systems run
> very little risk of infection by "killer worms," or anything else.

I agree with that statement, but I suspect that our ideas of "properly
secured systems" differ.

> True 0-day exploits that make use of previously totally unsuspected
> vulnerabilities and bypass properly configured firewalls are
> exceedingly rare.

0-day exploit attacks seem rare.  However, they are quite possible.  I
think that everyone on this list is just waiting for the next serious
security hole to be found in BIND, Sendmail, or IIS.  A malicious
person could write an entirely new virus without a delivery mechanism
and just wait for the next suitable security hole to be published
before a patch is available.  They could probably put the appropriate
delivery code into their virus shell and release it faster than the
vendor could hope to write a fix (let alone release it and have
customers install it).

> 'Taking down the Internet' will involve a lot more than getting a
> bunch of idiots to open attachments with names like "readme.exe."
> It's important to draw a distinction between clogging the Internet
> with spurious traffic (Denial of Service) and actually disrupting
> routing.

I am curious: if you had a good number of well-connected machines
sending out minimum-size packets to random addresses, how hard would
it be to overload a core router?

I guess that people who design core routers count on traffic having an
average packet size of at least 400 bytes and on there being some
possibility of route caching to save route lookup time.  If lots of
small packets significantly reduce the average packet size (and
therefore increase the number of packets to handle) while also
increasing the number of destination addresses in use (thrashing the
cache), could this cause a router to suddenly start dropping
significant numbers of good packets?
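
As a rough back-of-the-envelope sketch (the 400 byte figure is just my
guess above, and the link rates below are only examples), here is what
the per-packet lookup load looks like when the average packet size
drops from 400 bytes to a 40 byte minimum:

/* pps.c - rough packets-per-second load at different average packet
 * sizes.  The link rates and the 400 byte figure are assumptions for
 * illustration only. */
#include <stdio.h>

int main(void)
{
    double link_bps[] = { 155e6, 622e6, 2.5e9 };  /* OC-3, OC-12, OC-48 */
    double pkt_bytes[] = { 400.0, 40.0 };
    int i, j;

    for (i = 0; i < 3; i++)
        for (j = 0; j < 2; j++)
            printf("%4.0f Mbit/s link, %3.0f byte packets: %8.0f pkts/sec\n",
                   link_bps[i] / 1e6, pkt_bytes[j],
                   link_bps[i] / (pkt_bytes[j] * 8.0));
    return 0;
}

On a 2.5Gbit/s link that is the difference between roughly 780,000 and
7,800,000 route lookups a second, before the effect of random
destinations on any route cache is even considered.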

I would be interested in seeing some comments from someone who really
knows how big routers work in regard to this issue.

> DoS is potentially serious if massively distributed, but even the
> worst DDoS attacks are temporary. Incapacitating routers or root
> name servers, on the other hand, would have far more lasting effects
> on Internet communications, but widespread failure of these devices
> can be obviated by as simple a trick as ensuring heterogeneity of
> equipment (by their nature worms

What if millions of machines suddenly started sending out random DNS
queries with forged source addresses directly to the root servers, to
any local name servers configured in the OS, and to the servers for
their country's TLD?

No matter what types of equipment were used, that would result in ISP
DNS caches being taken out first (causing the more technical users to
configure their machines to query the root servers directly instead
of a DNS forwarder), with the root servers and the TLD servers
eventually being taken out as well.

> But if these same software vendors would take the responsibility
> upon themselves to train their programmers to code securely and not
> to release software until it was exhaustively tested for security
> vulnerabilities, the need for scrambling to release/install patches
> would disappear.  As an example, you can't target a buffer overflow
> against software that has no runaway string operations or other
> variables that lack bounds checking.
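
As a rough illustration of the kind of runaway string operation being
described (a hypothetical fragment, not taken from any of the programs
mentioned), the difference between an exploitable copy and a bounded
one is a single check:

/* overflow.c - hypothetical fragment for illustration only. */
#include <stdio.h>
#include <string.h>

void greet_unsafe(const char *name)
{
    char buf[64];
    strcpy(buf, name);             /* no bounds check: a long name
                                      overwrites the stack */
    printf("hello %s\n", buf);
}

void greet_safe(const char *name)
{
    char buf[64];
    strncpy(buf, name, sizeof(buf) - 1);
    buf[sizeof(buf) - 1] = '\0';   /* bounded copy, always terminated */
    printf("hello %s\n", buf);
}

int main(int argc, char **argv)
{
    if (argc > 1)
        greet_safe(argv[1]);       /* greet_unsafe(argv[1]) is the bug */
    return 0;
}

The second version gives an overflow attack nothing to target, which
is exactly the point being made above.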

I recommend reading the NSA paper "The Inevitability of Failure"
http://www.nsa.gov/selinux/inevit-abs.html .  It makes a good case
that merely fixing buffer overflows etc. is not enough for good
security.  It's a good part of providing a secure system (and a part
that has not been done properly on most OSs and applications), but you
need far more.

> Concentrating on patching mechanisms is treating a symptom, not the
> disease.  Patching will never run better than a poor second to
> secure coding.

Absolutely.  Note that "secure coding" requires avoiding excessive
privilege, and that can only properly be provided by Mandatory Access
Control.
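
To give one application-level example of avoiding excessive privilege
(a rough sketch only; the uid/gid values are placeholders, and MAC
enforces such limits from outside the application whether or not its
code gets this right), a daemon should drop root as soon as the work
that needs it is done:

/* dropperms.c - sketch of a daemon dropping root privilege after
 * binding a privileged port.  Minimal error handling; uid/gid 65534
 * ("nobody" on many systems) is a placeholder. */
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <string.h>
#include <unistd.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    struct sockaddr_in addr;
    int s = socket(AF_INET, SOCK_STREAM, 0);

    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons(25);              /* privileged port, needs root */
    addr.sin_addr.s_addr = htonl(INADDR_ANY);

    if (s < 0 || bind(s, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        perror("bind");
        exit(1);
    }

    /* Root is no longer needed, so give it up before touching any
     * untrusted input. */
    if (setgid(65534) < 0 || setuid(65534) < 0) {
        perror("drop privileges");
        exit(1);
    }

    /* ... listen(), accept() and handle connections as the
     * unprivileged user ... */
    close(s);
    return 0;
}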

-- 
http://www.coker.com.au/selinux/   My NSA Security Enhanced Linux packages
http://www.coker.com.au/bonnie++/  Bonnie++ hard drive benchmark
http://www.coker.com.au/postal/    Postal SMTP/POP benchmark
http://www.coker.com.au/~russell/  My home page



-
ISN is currently hosted by Attrition.org

To unsubscribe email majordomo () attrition org with 'unsubscribe isn'
in the BODY of the mail.

