Full Disclosure mailing list archives

Re: Arbitrary DDoS PoC


From: Gage Bystrom <themadichib0d () gmail com>
Date: Mon, 13 Feb 2012 05:17:02 -0800

Uhh...looks pretty standard boss. You aren't going to DoS a halfway decent
server with that using a single box. Sending your request through multiple
proxies does not magically increase the resource usage of the target; it's
still your output power vs. their input pipe. A box with a 10 Mbit/s uplink
still emits at most 10 Mbit/s no matter how many proxies relay it. Sure, it
gives a slight boost in anonymity and obfuscation, but it does not actually
increase effectiveness. If anything it decreases effectiveness, because you
bear the overhead of sending through a proxy, giving the target ample time
to recover between requests.

Even if you look at it as a tactic to bypass blacklisting, you still aren't
going to overwhelm the server. That means you need more pawns to do your
bidding. This creates a bit of a problem, however, as all of your slaves
are then funneled through a limited selection of proxies, reducing the
number of hosts the server needs to blacklist. The obvious circumvention is
to not use proxies for the pawns at all, and to rely on sheer numbers
and/or superior resource-exhaustion methods.
On Feb 13, 2012 4:37 AM, "Lucas Fernando Amorim" <lf.amorim () yahoo com br>
wrote:

With the recent wave of DDoS attacks, one model that has received little
attention is the one where the zombies are never compromised by a Trojan.
In the standard model of a DDoS attack, the machines are either purchased,
usually as VPSes, or recruited through Trojans, thus forming a botnet. But
the arbitrary form does not require acquiring a collection of computers at
all. Existing programs, servers, and protocols are abused to make requests
against the target on the attacker's behalf. P2P programs are especially
vulnerable; so are DNS, internet proxies, and the many sites that make
requests on a user's behalf, such as Facebook or the W3C.

To demonstrate this, I wrote a 60-line proof-of-concept script that hits
most HTTP servers on the Internet, even those with protections like
mod_security or mod_evasive. It can be found at this link [1] on GitHub.
Solving the problem depends on reformulating the protocols and on limiting
the number of concurrent and total requests that proxies and programs may
make to a given site, returning a cached copy of the last response once
the limit is exceeded.

[1] https://github.com/lfamorim/barrelroll
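
A minimal sketch of the mitigation described above, assuming a fixed
sliding window with per-client request counters; the limits, names, and
data structures below are illustrative assumptions, not taken from the PoC:

    import time
    from collections import defaultdict, deque

    WINDOW = 10        # seconds per accounting window (assumed value)
    MAX_REQUESTS = 20  # max requests per client per window (assumed value)

    recent = defaultdict(deque)  # client id -> timestamps of recent requests
    last_response = {}           # resource -> cached copy of last response

    def handle(client, resource, render):
        # Serve `resource` for `client`; `render` builds a fresh response.
        now = time.time()
        q = recent[client]
        while q and now - q[0] > WINDOW:  # drop timestamps outside the window
            q.popleft()
        if len(q) >= MAX_REQUESTS:
            # Over the limit: hand back the cached copy instead of doing work.
            return last_response.get(resource, "503 try again later")
        q.append(now)
        body = render()                   # the expensive part being protected
        last_response[resource] = body
        return body

For example, handle('203.0.113.7', '/index.html', build_index) (where
build_index is a hypothetical page renderer) serves at most 20 fresh copies
of /index.html to that client every 10 seconds, and the cached copy after
that.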

Cheers,
Lucas Fernando Amorim
http://twitter.com/lfamorim

_______________________________________________
Full-Disclosure - We believe in it.
Charter: http://lists.grok.org.uk/full-disclosure-charter.html
Hosted and sponsored by Secunia - http://secunia.com/
