Dailydave mailing list archives

Re: Rcov - interactive code coverage for fuzzers


From: Jared DeMott <demottja () msu edu>
Date: Wed, 28 Mar 2007 00:41:36 -0400

So I think everyone brought up some great points.  (Though the
discussion would have probably been better suited for
fuzzing () whitestar linuxbox org.)

I think it's cool that Kowsik is blogging on fuzzing - I believe I
covered some of these points in a 2006 Defcon talk (attack surface,
complete code coverage (CC) != 0 vulnerabilities, etc.).  Sergio &
Kowsik note that discovering the attack surface prior to fuzzing could
be helpful.  In fact, that leads to another important observation --
when doing interface testing we'll never reach 100% CC.  It would be
more accurate to say 100% attack surface coverage != 0 vulnerabilities.

The distinction is important because the attack surface is typically
only a fraction of the total code.  Determining the total functions or
basic blocks that lie on an attack surface can be a challenge.  Here's a
lame example: If it's found that 5 of 100 total functions read in cmd
line args and 15 of 100 functions handle network traffic, we simply take
the ratio:
    local attack surface = 5/100 or 5% of total code
    remote attack surface = 15/100 or 15% of total code
A remote fuzzing tool that hits 15 funcs would be said to have hit 100%
of the remote attack surface.  But did it hit all combinations of paths
with all combinations of data? :)
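
(To make that arithmetic concrete, here is a quick Python sketch --
purely hypothetical; the function sets below are invented, not taken
from any real target:)

    # Hypothetical example: 100 total functions, of which 5 read cmd
    # line args and 15 handle network traffic.
    all_funcs      = {"f%02d" % i for i in range(100)}
    local_surface  = {"f%02d" % i for i in range(5)}       # cmd line parsing
    remote_surface = {"f%02d" % i for i in range(5, 20)}   # network handling

    def surface_fraction(surface):
        """What fraction of the total code lies on this attack surface?"""
        return len(surface) / len(all_funcs)

    def surface_coverage(hit, surface):
        """What fraction of the attack surface did the fuzz run hit?"""
        return len(hit & surface) / len(surface)

    print("local attack surface  = %.0f%% of total code"
          % (100 * surface_fraction(local_surface)))
    print("remote attack surface = %.0f%% of total code"
          % (100 * surface_fraction(remote_surface)))

    # A remote fuzz run that exercised all 15 network-facing functions
    # scores 100% surface coverage -- yet that says nothing about which
    # paths or data combinations were exercised inside those functions.
    hits = set(remote_surface)
    print("remote surface coverage = %.0f%%"
          % (100 * surface_coverage(hits, remote_surface)))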

Anyway, since I enjoy the TV show MythBusters, here goes:

MYTH 1:
"This thread was not about vendors ..."
Yet I submit that anytime a fuzzing vendor posts research to a
high-profile security mailing list instead of the fuzzing mailing list
-- it is about vendors.
And since vendors have been brought up, I posted an interesting option
for vendors last week on the fuzzing list.  Also, I'm curious how people
on this list feel about the new "value proposition".  Instead of "just"
tools to test protocols, MU and BreakingPoint are selling something
along the lines of:
Metasploit functionality for IDS/IPS testing + attack/pen testing + load
testing + protocol fuzzing + ??.  I believe MU calls this the "tiger
team in a box."  BreakingPoint calls it "the most powerful network test
system on the planet".  I just wish we could get more data on the exact
internal operations of ALL commercial fuzzers.  When a software QA
guy, a security group, or a big network company's test department
weighs its specific needs -- on the fence between open source,
in-house tools, and the various fuzzing companies -- and the time
comes to write a big check, they need to be sure of what they're
paying for.

MYTH 2:
"Fuzzers don't automagically become better because of code coverage"
Actually I'd like to give a talk this year at Black Hat about how they
might! :)  This is the first paragraph of my proposed thesis:

"Run-time code coverage analysis is feasible and useful when application
source code is not available.  An evolutionary test tool receiving such
statistics can use that information as fitness for pools of sessions to
actively learn the interface protocol.  We call this activity grey-box
fuzzing.  We propose that, when applicable, grey-box fuzzing is more
effective at finding bugs than RFC-compliant or capture-replay
mutation black-box tools."

Of course, I have no proof of this yet, but that's why it's called
research! :)
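
In case the grey-box idea is unclear, here's the shape of it as a toy
Python sketch -- emphasis on toy: the coverage() function below is a
stand-in for real binary instrumentation, and nothing here is the
actual tool:

    import random

    # Toy grey-box loop: run-time coverage becomes the fitness that
    # drives an evolutionary search over a pool of fuzz sessions.

    def coverage(session):
        """Stand-in for instrumentation: which 'blocks' a session hits."""
        blocks = set()
        if session.startswith(b"GET "):
            blocks.add("parse_verb")
            if b"\r\n" in session:
                blocks.add("parse_headers")
            if len(session) > 64:
                blocks.add("long_request_path")
        return blocks

    def mutate(session):
        """Flip one random byte."""
        i = random.randrange(len(session))
        return session[:i] + bytes([random.randrange(256)]) + session[i + 1:]

    def evolve(pool, generations=50):
        for _ in range(generations):
            # Fitness = number of blocks hit; keep the fittest half,
            # breed the rest by mutating random survivors.
            pool.sort(key=lambda s: len(coverage(s)), reverse=True)
            survivors = pool[: len(pool) // 2]
            children = [mutate(random.choice(survivors))
                        for _ in range(len(pool) - len(survivors))]
            pool = survivors + children
        return pool

    seeds = [b"GET / HTTP/1.0\r\n\r\n", b"HELO example.com\r\n", b"A" * 80]
    best = evolve(seeds)[0]
    print(best, coverage(best))

A real tool would of course replace coverage() with actual run-time
measurement of the target and would favor sessions that reach new
blocks, but the fitness-driven loop is the whole idea.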

Blessings,
Jared
_______________________________________________
Dailydave mailing list
Dailydave () lists immunitysec com
http://lists.immunitysec.com/mailman/listinfo/dailydave

