Dailydave mailing list archives

Re: On the Effectiveness of Address Space Randomization


From: spender@grsecurity.net
Date: Fri, 29 Oct 2004 15:15:18 -0400

> You don't have to bandy words here. We'd love to hear your (or Brad's)
> comments for those of us to whom "glaring mistake" is not so glaring.
> :>

On page 8 they talk about crash detection and reaction mechanisms, and
imo they handle the question of DoS vs. owned machine poorly in terms
of cost to the company.  What's more expensive to Amazon: 1 hour of
downtime, or 100k stolen credit card numbers?  How about lost *future*
business due to the insecurity of their systems?  They don't back up
their position with any numbers or evidence, only a "belief."  They
briefly discuss crash detection/reaction mechanisms that block the
attacker from connecting instead of DoS'ing everyone, and in the same
paragraph quickly dismiss them because of zombie networks numbering in
"hundreds of thousands of compromised hosts" (and it's a bit of a
stretch to claim this kind of situation.  What blackhats do you know
of using 100K+ machine zombie networks to launch complex, private
exploits at a single server?).  They're also depending too much on
their exaggeration of how easy it is to bruteforce the randomization
(more on that later).  Last time I checked, 2^24 (the number of
possible locations of a stack address) is much larger than 100,000.
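
To put actual numbers on that (my own back-of-the-envelope arithmetic,
not the paper's), here's what their hypothetical zombie network buys
them against 24 bits of stack randomization:

# Odds of a 100K-probe bruteforce against 24 bits of stack
# randomization, one guess per probe.  100,000 is the paper's own
# zombie-network figure; the rest is simple probability.
STACK_BITS = 24
search_space = 2 ** STACK_BITS        # 16,777,216 possible positions
probes = 100_000                      # one probe per zombie host

# Chance that at least one probe guesses the right address.
p_all_miss = (1 - 1 / search_space) ** probes
print(f"search space: {search_space}")
print(f"P(>=1 hit in {probes} probes): {1 - p_all_miss:.4f}")

That comes out to roughly 0.6%: the entire zombie network gets burned
on about a 1-in-168 shot.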

They also fail to realize that PaX allows for dumping core on each
detected exploit attempt.  So it's not to the attacker's benefit to be
bruteforcing the machine, as the administrator need only have someone
analyze the core dump, and the attacker's precious private bug will be
killed.

They also talk about the effect of load balancing on bruteforcing.
I don't understand why they think a watcher running locally on one of
the servers would be unable to detect an attack, since your chances of
owning a single machine based on the packets you send to it are
completely independent of its participation in load balancing.  Maybe
they're making a stupid implicit assumption that a watcher would need
10,000 exploit attempts on the single machine before it realizes it's
being bruteforced.  In reality, this number would be < 5 over a wide
time interval.  They talk about the difficulty of making decisions for
the load balancing network, but apparently have never heard of log
accumulators/correlators or Prelude.
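
As an illustration of how little machinery such a watcher needs, here
is a minimal sketch.  The log path and the "PAX: terminating task"
message format are assumptions about a typical PaX-plus-syslog setup,
not a drop-in tool:

#!/usr/bin/env python3
# Watch the kernel log for PaX kill messages and alert after a
# handful of them, far fewer than 10,000.
import re
import sys

THRESHOLD = 5                    # attempts before we call it an attack
LOG = "/var/log/kern.log"        # assumed syslog destination for PaX

pax_kill = re.compile(r"PAX: terminating task: (\S+)")
hits = {}

for line in open(LOG):
    m = pax_kill.search(line)
    if m:
        task = m.group(1)
        hits[task] = hits.get(task, 0) + 1
        if hits[task] == THRESHOLD:
            # A real deployment would add a firewall rule or page an
            # admin here; printing is enough for the sketch.
            print(f"ALERT: {THRESHOLD} PaX kills for {task}, "
                  "looks like a bruteforce", file=sys.stderr)

Feeding the same events from every server into a correlator like
Prelude is just the distributed version of this loop.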

Also on page 8, it says:
"it is, nevertheless, also true that it is difficult to distinguish
exploitable vulnerabilities from mere (segfault-inducing) denial of
service.  Neither an automated watcher program nor a system
administrator working under time pressure can be expected to make the
correct determination."

This is simply not true.  If that were the case, every time a program
segfaulted you would see a PaX log entry, but that's not what happens.

The major problem with the paper, however, is that they're attacking a
straw man.  The point of the paper is supposedly to show how ASLR can
be broken in a generic sense, yet they exploit a specific class of bug
with a specific technique, and make use of a conveniently unique
property of Apache's stack contents that saves them from having to
guess an additional 24 bits of randomization.  Additionally, it looks
as if the machine was attacked over a LAN instead of over the internet
(where the connect latency would be 2 orders of magnitude larger),
which greatly influenced their result of 216 seconds for the attack.
This result was also dependent on their Apache being configured to
allow 150 simultaneous connections (you won't see this kind of
behavior with other daemons).  So it looks like they're setting up
optimal, unrealistic conditions for themselves in every case, and then
claiming that this single instance says something about ASLR's
effectiveness in general.
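
Scaling their own numbers shows how much those conditions matter (my
arithmetic, using only figures already quoted above; the 100x
multiplier is the two-orders-of-magnitude latency estimate):

# Rough rescaling of the paper's 216-second LAN result for internet
# connect latency, assuming the attack stays latency-bound.
lan_seconds = 216
latency_factor = 100             # ~2 orders of magnitude slower

internet_seconds = lan_seconds * latency_factor
print(f"{internet_seconds} s ~= {internet_seconds / 3600:.1f} hours")

That's six hours of hammering a single server over 150 parallel
connections: hardly a stealthy, generic break of ASLR.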

-Brad

_______________________________________________
Dailydave mailing list
Dailydave@lists.immunitysec.com
http://www.immunitysec.com/mailman/listinfo/dailydave

