Bugtraq mailing list archives

More impact from CRII


From: "Jon Austin" <admin () flux net au>
Date: Mon, 6 Aug 2001 15:41:41 +1000

moderator, please pass to list. Comments below are Simon's.

----- Original Message -----
From: "Simon Hackett" <simon () internode com au>
To: "Jon Austin" <admin () flux net au>
Sent: Monday, August 06, 2001 3:11 PM
Subject: Re: [Oz-ISP] Transparent Proxy servers being messed up by CodeRed
II


I'm not on bugtraq. Please feel free to forward my message to the
list on my behalf. NB we're also seeing impacts on other platforms
that use NAT (whose NAT connection tables are getting overstressed by
the same issues)

Simon

----- Original Message -----
From: "Simon Hackett" <simon () internode com au>
To: "Roddy Strachan" <roddy () satlink com au>; "Aus-Isp"
<aussie-isp () aussie net>
Sent: Monday, August 06, 2001 2:22 PM
Subject: [Oz-ISP] Transparent Proxy servers being messed up by Code Red II
[was:Re: [Oz-ISP] Very strange problem]


I might have your answer.

Code Red and its new (and quite different) derivative, Code Red II,
have a nasty side effect for those of us running transparent proxy
servers.

It tends to bring them down (out of service).

I've reported this to my own vendor (Cisco), but it'll probably
affect practically all transparent proxy servers.

What's happening? Read on.

Consider that your customer base, if you have a reasonably sized one,
will have lots of compromised hosts inside it already (and more by
the hour). At our place on Sunday alone I was seeing over 35,000
outbound queries from compromised hosts in our customer base, going
back out to the Internet at large.

These attack attempts involve HTTP 'GET' requests to random IP
addresses. Many (indeed most) of those addresses won't respond to
the query at all.

If you are running a transparent proxy, these requests will be
captured and re-issued by your transparent proxy. For each of these
queries, one connect table entry will be used up as your proxy tries
to open a tcp connection to the destination host concerned.
Additional buffer space and other resources will be consumed in
buffering the pending request from the worm-compromised system while
your server waits, in vain, for the request to complete, before
eventually timing it out.

While it's doing this, the compromised hosts are making other similar
attempts - up to 600 in parallel, per compromised host, in the case
of some variations of the latest worm.

Consider the concurrent connection handling capacity of your proxy,
and imagine how quickly those resources will be chewed up and blocked
by even half a dozen compromised hosts inside your downstream
customer base.
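To put rough numbers on it, here's a back-of-envelope sketch in Python. The 600-parallel-attempts figure is the worm behaviour described above; the table size and host count are assumptions purely for illustration:

```python
# Back-of-envelope: how fast do worm probes exhaust a proxy's
# connection table? Each unanswered SYN ties up one table slot
# until the TCP connect timeout finally expires.

ATTEMPTS_PER_HOST = 600     # parallel probes per compromised host (per the worm above)
COMPROMISED_HOSTS = 6       # "even half a dozen" hosts downstream
TABLE_SIZE = 8192           # assumed proxy connection-table capacity (hypothetical)

slots_consumed = ATTEMPTS_PER_HOST * COMPROMISED_HOSTS
print(f"slots consumed: {slots_consumed} of {TABLE_SIZE} "
      f"({100 * slots_consumed / TABLE_SIZE:.0f}% of the table)")
```

Scale the host count up to a realistically-sized infected customer base and the table is gone many times over.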

Oops.

So you can blame Microsoft's lax coding practices, again, this time
for costing you the money you'll blow in extra downloads, due to
needing to turn your transparent proxy off until this particular worm
blows over.

You can spot this happening easily - just log into your proxy server
and display the TCP connection table. If it's got lots of 'SYN_SENT'
or similar entries, those are probably (in the main) worm-generated
cruft that's eating your system resources for lunch.
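As a quick illustration, a few lines of Python can tally the half-open entries from netstat-style output. The sample table below is fabricated for the example; in practice you'd feed it the real thing (e.g. the output of `netstat -an`):

```python
# Count half-open (SYN_SENT) entries in a netstat-style connection
# table. A large count on a transparent proxy is a strong hint that
# worm probes are eating the connection table.

sample = """\
tcp  0  0 10.0.0.1:3128  203.0.113.5:80   SYN_SENT
tcp  0  0 10.0.0.1:3128  198.51.100.9:80  SYN_SENT
tcp  0  0 10.0.0.1:3128  192.0.2.44:80    ESTABLISHED
"""

half_open = sum(1 for line in sample.splitlines() if "SYN_SENT" in line)
print(f"{half_open} half-open connections")
```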

On my transparent proxy platform (Cisco) I've been able to work
around the problem by configuring blocking rules that notice and stop
the GET requests from the worm. These rules work on the 2.x releases
of the Cisco Cache Engine code, and while they also fire on the
latest (3.x) code train (madly blocking attack attempts), my
observation today is that my Cache Engine 3.x system is still being
killed by the worm attacks.

I'm still trying to work out why (the blocking rule implementation is
different in 3.x, and that is probably part of the issue). My 3.x
based engine lasts about two minutes before all web I/O comes to a
standstill, each time I try to fire it up. Oh well, back to my old
system (glad I still have it!)

[yep, I'm already taking this up with Cisco, of course; dealing with
it may not be a simple task. For interest, the 'fix' on a 2.51 system
is:

rule enable
rule block url-regex ^http://.*www\.worm\.com/default\.ida$
rule block url-regex ^http://.*/default\.ida$

]

On (say) squid based transproxy code, you should be able to set up a
blocking rule in the squid configuration file. It's not hard to find
a request to use as an example to block - just look for 'default.ida'
in your proxy logs (and be prepared to be surprised at how many of
them you find).
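For example, on a squid 2.x system the blocking rule might look something like the following (a sketch only, untested here; the ACL name is arbitrary):

```
# Hypothetical squid.conf fragment: deny any request whose URL path
# contains the worm's 'default.ida' probe target.
acl codered urlpath_regex -i /default\.ida
http_access deny codered
```

Make sure the deny line sits above any broader http_access allow rules, or it won't be consulted.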

Oh joy.

So, Roddy, maybe your problem, on your unchanged, working-for-ages
configuration, is that while you didn't change your server, the world
around you just got one notch less friendly.

Hope this helps someone else.  It will probably affect practically
all vendors of transparent proxy code.

Cheers,
Simon Hackett

