nanog mailing list archives

Re: Solution: Re: Huge smurf attack


From: "Alex P. Rudnev" <alex () Relcom EU net>
Date: Tue, 12 Jan 1999 12:35:16 +0300 (MSK)

> On Mon, 11 Jan 1999, Dan Hollis wrote:
>
> > Or perhaps someone would like to take a proactive approach at scanning for
> > smurfable networks and closing them before the script kiddies find them?
>
> There are at least 2 different teams already doing this.  The trouble is,
> who has the authority to disconnect smurf amp networks?  Nobody,
> really...but the reality is that if an RBL-like system were setup for
> smurf amp networks, and if some of the larger backbones subscribed to it,
> smurf amp networks would get fixed real fast.

You can publish the access list for outgoing links, discarding the initial
packets sent to these amplifiers. A lot of ISPs are ready to install such lists.
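
A rough sketch of how such a published list might be consumed, assuming a
plain-text feed with one amplifier (broadcast) address per line; the file name
and ACL number are placeholders, and the output is Cisco-style extended ACL text:

#!/usr/bin/env python
# Sketch: read an assumed feed of smurf-amplifier broadcast addresses
# (one dotted-quad per line, '#' comments allowed) and print Cisco-style
# extended ACL entries that drop the initiating echo-requests on an
# outgoing link.

import sys

def read_amplifiers(path):
    """Return the amplifier addresses listed in the file at `path`."""
    addrs = []
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if line and not line.startswith('#'):
                addrs.append(line)
    return addrs

def print_acl(addrs, acl_number=150):
    """Emit one deny per amplifier, then permit everything else."""
    for addr in addrs:
        # Discard the initial echo-request aimed at the amplifier's
        # broadcast address before it leaves our network.
        print('access-list %d deny icmp any host %s echo' % (acl_number, addr))
    print('access-list %d permit ip any any' % acl_number)

if __name__ == '__main__':
    # The default file name is only a placeholder.
    path = sys.argv[1] if len(sys.argv) > 1 else 'smurf-amplifiers.txt'
    print_acl(read_amplifiers(path))
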

Or the other idea: if someone set up a BGP feed announcing these addresses as
/32 hosts (like the RBL does for spam relays), anyone could route these
addresses to null and keep their customers from running smurf attacks through them.
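
A companion sketch for the null-route idea, using the same assumed
one-address-per-line list; 'Null0' is the discard interface on Cisco-style
routers, and the sample addresses are examples only:

# Sketch: express the amplifier list as /32 discard routes -- roughly
# what a router would do with an RBL-style BGP feed of amplifier hosts
# announced as /32s.

def print_null_routes(addrs):
    """Emit one static null route per amplifier /32."""
    for addr in addrs:
        # Traffic toward the amplifier is sent to the bit bucket, so
        # customers behind this router cannot source a smurf through it.
        print('ip route %s 255.255.255.255 Null0' % addr)

if __name__ == '__main__':
    # Example addresses only.
    print_null_routes(['192.0.2.255', '198.51.100.255'])
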


> Can any of the readers working for backbones (like UUNet, Sprint, C&W)
> speak up and tell us if there's a chance in hell their networks would
> subscribe to such a service?

It's of great interest. Though, pay attention to:
(1) to launch a 'smurf', broken servers are used as amplifiers;
(2) most of these servers are in non-commercial networks (scientific, for
example);
(3) such networks often have their own peering relations instead of using
UUNet and the other monsters for the service.




> ----don't waste your cpu, crack rc5...www.distributed.net team enzo---
>  Jon Lewis <jlewis () fdt net>  |  Spammers will be winnuked or
>  Network Administrator       |  nestea'd...whatever it takes
>  Florida Digital Turnpike    |  to get the job done.
> ______http://inorganic5.fdt.net/~jlewis/pgp for PGP public key________



Aleksei Roudnev, Network Operations Center, Relcom, Moscow
(+7 095) 194-19-95 (Network Operations Center Hot Line),(+7 095) 239-10-10, N 13729 (pager)
(+7 095) 196-72-12 (Support), (+7 095) 194-33-28 (Fax)


