Dailydave mailing list archives

the future of fuzzing [was: Rcov]


From: Gadi Evron <ge () linuxbox org>
Date: Mon, 26 Mar 2007 19:05:58 -0500 (CDT)

On Mon, 26 Mar 2007, Kowsik wrote:
We just released rcov-0.1, an interactive/incremental code coverage
tool to assist in building effective fuzzers.

Quick summary:

- It's a WEBrick browser-based application (ruby)
- Uses gcov's notes/data files to get at blocks and function summaries
- Interactively/incrementally shows the coverage information while fuzzing
- Uses ctags to cross reference functions/prototypes/definitions/macros

Hi Kowsik, thanks for this.

I have a few notes, though, as I believe this can be taken much further
(at least, my research so far suggests as much).

We have three levels or layers (depending on the approach):
1. Building better fuzzers (which you cover).
2. Helping the fuzzing process, fuzzing better.
3. Making it easier to find the actual vulnerability once an indication is
   found (a successful test case, or as they say in QA, a passing one).

Several folks in the past few months have said that fuzzing isn't new and
has been done for years - that much is true.

Some folks also said that fuzzing is as simple as it gets and has nowhere
left to evolve. That is very much false.

Code coverage, static analysis, run-time analysis, etc. all have a place
in the future of fuzzing.
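
To make the code coverage piece concrete: here is a minimal sketch (mine,
in Python rather than rcov's Ruby/WEBrick; "target.c" and the batching are
placeholders, not rcov's actual interface) of pulling gcov's per-function
summaries between fuzz iterations - the raw signal a coverage-aware driver
would feed on:

    # A rough sketch, not rcov: poll gcov's per-function summaries while the
    # fuzzer runs. Assumes the target was built with
    # -fprofile-arcs -ftest-coverage and that the .gcda files land next to
    # the source; "target.c" is a placeholder. gcov's exact output format
    # varies a bit between gcc versions.

    import re
    import subprocess

    def function_coverage(source_file):
        """Return {function_name: percent_of_lines_executed} via 'gcov -f'."""
        out = subprocess.check_output(["gcov", "-f", source_file]).decode()
        coverage, current = {}, None
        for line in out.splitlines():
            m = re.match(r"Function '(.+)'", line)
            if m:
                current = m.group(1)
                continue
            m = re.match(r"Lines executed:([\d.]+)% of \d+", line)
            if m and current:
                coverage[current] = float(m.group(1))
                current = None
        return coverage

    # e.g. snapshot = function_coverage("target.c") after every batch of
    # test cases, diffed against the previous snapshot to see which
    # functions the last batch actually reached.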

I see fuzzer development in the coming years as redefining "dumb fuzzing"
to mean today's protocol-based smart fuzzing, with "smart fuzzing" coming
to mean what changes interactively as you fuzz.

The most we see today (in most cases) is the engine running undisturbed,
while the monitor (if one even exists) is a simple debugger.

Host and network monitoring that uses profiling technologies, maps
functions and paths, watches for memory issues, and so on is coming fast.
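
As a strawman for that kind of host monitor (not any existing product;
Linux-only, and "./target", the timeout and the memory limit are made-up
placeholders), even something this small tells the driver more than a bare
debugger does:

    # Run one test case against the target, sample its memory from /proc,
    # and report how it died. "./target", the 60s timeout and the 1 GB RSS
    # limit are placeholders for illustration.

    import subprocess
    import time

    def run_monitored(cmd, testcase, timeout=60, rss_limit_kb=1024 * 1024):
        proc = subprocess.Popen(cmd + [testcase])
        deadline = time.time() + timeout
        peak_rss = 0
        while proc.poll() is None:
            try:
                with open("/proc/%d/status" % proc.pid) as f:
                    for line in f:
                        if line.startswith("VmRSS:"):  # resident set, in kB
                            peak_rss = max(peak_rss, int(line.split()[1]))
            except IOError:
                break  # the process exited between poll() and the read
            if peak_rss > rss_limit_kb or time.time() > deadline:
                proc.kill()
                proc.wait()
                return "hang-or-memory", peak_rss
            time.sleep(0.1)
        if proc.returncode < 0:                        # killed by a signal
            return "crash: signal %d" % -proc.returncode, peak_rss
        return "exit: %d" % proc.returncode, peak_rss

    # e.g. verdict, rss = run_monitored(["./target"], "case-1234.bin")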

Today, changing the behavior of a fuzzer as it is running is difficult
(there is no real Driver, just an Engine). A simple example of this
evolution could be watching CPU usage. If the CPU usage spikes, it could
mean one of two things (see the sketch after this list):
1. We are sending too many requests per second - we should slow down the
engine.
2. (if the spike is in the target's thread itself) We are on to something -
we should explore this attack (likely one of the 10,000 "attacks" we went
through), or switch to a different fuzzing engine to explore that
particular section of the program (as we mapped it - code coverage again).
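
Here is a toy sketch of what such a Driver loop might look like (my own
illustration, not a real fuzzer's API: CPU accounting straight from Linux
/proc, and send_batch() standing in for "fire N test cases at the
target"):

    # Toy Driver loop: watch the CPU the target burns per batch of test
    # cases and either throttle the engine or flag the batch as interesting.
    # Linux-only; send_batch() and the thresholds are made-up placeholders.

    import os
    import time

    CLK_TCK = os.sysconf("SC_CLK_TCK")

    def cpu_seconds(pid):
        """User + system CPU time the target has consumed so far."""
        with open("/proc/%d/stat" % pid) as f:
            fields = f.read().rsplit(")", 1)[1].split()
        utime, stime = int(fields[11]), int(fields[12])
        return (utime + stime) / float(CLK_TCK)

    def drive(pid, send_batch, rate=100):
        baseline = None
        while True:
            before, t0 = cpu_seconds(pid), time.time()
            send_batch(rate)
            spent = cpu_seconds(pid) - before
            elapsed = max(time.time() - t0, 1e-3)
            per_case = spent / rate
            if spent / elapsed > 0.9:
                # Case 1: the target is saturated - we are probably just
                # sending too many requests per second, so slow the engine.
                rate = max(1, rate // 2)
            elif baseline is not None and per_case > 5 * baseline:
                # Case 2: this batch cost far more CPU per case than usual -
                # we may be on to something, so replay/expand it (or hand
                # that region of the program to a different engine).
                print("interesting batch: %.4fs/case vs %.4fs baseline"
                      % (per_case, baseline))
            baseline = (per_case if baseline is None
                        else 0.9 * baseline + 0.1 * per_case)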

The two don't easily work together, not to mention stopping a fuzzer,
rewinding it, or, God forbid, running a different one at the same time
(against the same instance, anyway).

Which brings us to distributed fuzzing... but that's a whole
different subject yet again.

Fuzzing has a long way to go, and we haven't even really started to explore
full integration with static analysis tools (other than exchanging results).

We had a discussion on the fuzzing mailing list recently about genetic
fuzzing, but I am not really a math geek - Jared can explain that one
better... and so on.

All that before we explore uses for fuzzing outside of the development
cycle (mostly security QA) and vulnerability research, such as client-side
testing. Perhaps fuzzers will help us force the hand of software vendors
to develop more robust and secure code.

Working for a fuzzing vendor, I am only too familiar with the halting
problem and with seeking reality in the midst of eternal runs, but the
most interesting thing I have found in the past few months (and it isn't
technical) is the clash of cultures between QA engineers and security
professionals. It will be very interesting to see where we end up.

Thanks,

        Gadi.

--
"beepbeep it, i leave work, stop reading sec lists and im still hearing
gadi"
- HD Moore to Gadi Evron on IM, on Gadi's interview on npr, March 2007.

_______________________________________________
Dailydave mailing list
Dailydave () lists immunitysec com
http://lists.immunitysec.com/mailman/listinfo/dailydave

