nanog mailing list archives

Re: Real-Time Mitigation of Denial of Service Attacks Now Available With AT&T


From: "Wayne E. Bouchard" <web () typo org>
Date: Wed, 2 Jun 2004 09:28:35 -0700


On Wed, Jun 02, 2004 at 11:25:24AM -0400, Jon R. Kibler wrote:
> John Obi wrote:
> > ... since DDoS is the
> > nightmare of the internet now.


> The sad fact is that simple ingress and egress filtering would
> eliminate the majority of bogus traffic on the Internet -- including
> (D)DoS attacks. If all ISPs would simply drop all outbound packets
> whose source address is not a valid IP for the subnet of origin,
> and all inbound packets that do not have valid source IP addresses,
> the DDoS problem would be (for all intents and purposes) fixed. If
> proper filtering was done, then any DoS attacks would have to have
> either valid source IP addresses, or IP addresses that spoofed IPs
> within their network of origin. In either case, identifying and
> shutting down the attackers would become a greatly simplified task
> compared to the mess it is today.
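(For concreteness, the egress filter being described is essentially
BCP 38 source-address validation applied on the customer-facing
interface. A minimal sketch in Cisco IOS syntax -- the ACL number, the
interface name, and the documentation prefix 192.0.2.0/24 standing in
for the customer's assigned block are all just examples:

  access-list 110 permit ip 192.0.2.0 0.0.0.255 any
  access-list 110 deny   ip any any log
  !
  interface Serial0/0
   description Customer edge: permit only sources from the customer block
   ip access-group 110 in
)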

Sorry to say this, but IMHO this is a naive view. It would only
marginally lessen the severity of attacks. The bulk of machines being
used for DoS attacks are compromised hosts, and largely
intercontinental (from observations made from attacks against my
clients). There are already machines sequentially opening HTTP
sockets, retrieving a particular URL, and repeating that process
thousands of times. These sorts of attacks can't be spoofed. And yet
when I attempt to contact the administrators of those machines (even
when I find them in the US under the auspices of major service
providers with "good" abuse departments), I get zero response to the
problem. So if the people writing this DoS software don't care about
hiding the addresses for this type of attack, why hide the addresses
from others? The same sort of damage will be done whether the
addresses are spoofed or not.

Filtering traffic isn't the principal issue (though it will help). The
real problem is administrators who either don't care or flat-out refuse
to do anything about it. (Yes, the word "NO" has been said many times
when I've asked someone to investigate a possibly compromised host,
even when supported by many hundreds of kilobytes of filter logs.)

And then of course, even if they DO respond, the end user is the one
who ultimately has to solve the problem, and good luck getting THAT to
happen. (Yes, I know I'm a bit cynical about this, but that's the
result of long and hard experience fending off such events.)

Why no filtering by ISPs? "Because it takes resources and only benefits
the other guy" -- unless your network is the one under attack.

Every one of my connections has rpf enabled unless there is a very
valid reason not to (and that's decided case by case). Recent
improvements (I say recent, meaning over the last 5 years or so) have
made such efforts markedly more effective. The problem, as you state,
is getting the world at large to utilize these mechanisms.
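For reference, strict-mode uRPF amounts to a one-line interface
command on current platforms. A minimal sketch in Cisco IOS syntax
(the interface name is only an example; older images use
"ip verify unicast reverse-path" for the same check):

  interface FastEthernet0/1
   description Customer edge: drop packets that fail the reverse-path check
   ip verify unicast source reachable-via rx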

> Maintenance of the ACLs should not be the issue. A single ACL for each
> subnet would be all that would be required for egress filtering. About
> 30 ACLs on an inbound border router would be required for ingress
> filtering. Keeping the ingress ACLs current is a brain-dead task -- just
> subscribe to the bogon mailing list at cymru.com.
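(For context, the sort of ingress bogon filter being described looks
roughly like the following. This is only a partial sketch in Cisco IOS
syntax -- the ACL number and interface are examples, the authoritative
and current prefix list is the Team Cymru bogon reference, and the
entries below are just the obvious RFC 1918 and loopback ranges:

  access-list 120 deny   ip 10.0.0.0    0.255.255.255 any
  access-list 120 deny   ip 172.16.0.0  0.15.255.255  any
  access-list 120 deny   ip 192.168.0.0 0.0.255.255   any
  access-list 120 deny   ip 127.0.0.0   0.255.255.255 any
  access-list 120 permit ip any any
  !
  interface POS1/0
   description Upstream transit: drop packets sourced from bogon space
   ip access-group 120 in
)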

For smaller networks, yes. For larger networks, a single border router
can have two or three hundred connections, with allotted IP space
varying daily. That would mean frequent updates to an upstream ACL
(which may well be across an OC48), leading to many human-caused
outages. Simply not practical for all networks.

-Wayne

