Dailydave mailing list archives

Re: Coverage and a recent paper by L. Suto


From: Charles Miller <cmiller () securityevaluators com>
Date: Mon, 15 Oct 2007 20:36:41 -0500

I'm giving talks (a shorter and a more detailed one) on exactly the  
topic of code coverage and fuzzing this weekend at Toorcon:

http://toorcon.org/2007/event.php?id=34
http://toorcon.org/2007/event.php?id=60

First, you're all quite right that code coverage isn't a magic bullet
that's going to find bugs for you.  You're also correct that code
coverage doesn't necessarily imply the code has been adequately
tested.  (The obvious example is the strcpy that could be covered,
but only with small amounts of data used as the source.)
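
A minimal, hypothetical C sketch of that point (the function and the
buffer size are made up for illustration): the strcpy line is executed,
and so counted as covered, by a two-byte input, but the overflow only
shows up once the source is longer than the destination buffer.

    #include <stdio.h>
    #include <string.h>

    /* Hypothetical example: buf is 16 bytes, so the strcpy line is
     * "covered" by a short input like "hi", but only a source longer
     * than 15 characters actually triggers the overflow. */
    static void handle_name(const char *src)
    {
        char buf[16];
        strcpy(buf, src);               /* covered != adequately tested */
        printf("hello, %s\n", buf);
    }

    int main(int argc, char **argv)
    {
        /* the short default argument gives full coverage of
         * handle_name with no crash */
        handle_name(argc > 1 ? argv[1] : "hi");
        return 0;
    }
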
That said, let me explain why I think code coverage can be an
important tool for fuzzing.  Consider the following two very common
scenarios:

1. You run a finite, deterministic fuzzer like SPIKE and don't find  
any bugs (or at least any good ones).
2. You run a random fuzzer like GPF for some amount of time and don't
find any bugs.

What do you do in these situations?  How do you change the SPIKE  
configuration file to attempt to improve your chances of finding  
bugs?  How long do you run GPF before you give up?  For a random  
fuzzer like GPF, should you choose a different initial test case  
(i.e. PCAP file or whatever)?  Where do you start with your static  
analysis?  Would a different fuzzer have been a better choice?

I think in these situations it makes sense to look at the application
and consider what code has been executed and what hasn't.  After all,
if you haven't even executed a particular line of code, you definitely
won't find a bug in that line with fuzzing.
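
To put a concrete shape on "consider what code has been executed",
here's a rough, hypothetical C sketch of doing it by hand (in practice
you'd more likely lean on compiler instrumentation such as gcov):
every COVER() records that its line ran, and the dump at exit shows
which branches your test cases never reached.

    /* Hypothetical, hand-rolled coverage sketch: COVER() marks its
     * source line as executed; the table is dumped when the program
     * exits, so unreached code stands out. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <stddef.h>

    #define MAX_LINES 4096
    static unsigned char hit[MAX_LINES];

    #define COVER() do { hit[__LINE__ % MAX_LINES] = 1; } while (0)

    static void dump_coverage(void)
    {
        for (int i = 0; i < MAX_LINES; i++)
            if (hit[i])
                printf("line %d executed\n", i);
    }

    /* toy parser standing in for the code under test */
    static void parse_packet(const unsigned char *p, size_t len)
    {
        COVER();
        if (len < 4) { COVER(); return; }   /* short inputs bail out early        */
        if (p[0] == 0x7f) { COVER(); }      /* rare opcode a fuzzer may never hit */
        COVER();
    }

    int main(void)
    {
        atexit(dump_coverage);
        unsigned char pkt[8] = { 0 };
        parse_packet(pkt, sizeof pkt);
        return 0;
    }
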

Come to Toorcon and we can argue over a beer :)

Charlie


On Oct 15, 2007, at 3:24 PM, matthew wollenweber wrote:

Personally, I don't understand the current trend in fuzzer research
toward obtaining full code coverage.  Sure, it's nice to check
everything and have a fuzzer traverse all the functions in the code,
but maybe that comes at the cost of doing it all poorly.  If you have
a fixed amount of time to do the assessment, I'd rather spend the
time where it's needed.  As you said, it's better to thoroughly test
the code in the spots where the bugs are.

I do like instrumenting fuzzing and measuring where the fuzzer is
being effective and/or spending its time.  It's useful both to see
where the problems cluster and to give the thing a kick if it gets
stuck.

While it's not for web apps, I find the work that Greg Hoglund and
his guys at HBGary have done to be a step in the right direction.
His tool isn't really meant for fuzzing (at least from my limited
knowledge of it), but it takes an RE approach to find what's
important and focus there.  To achieve this, it measures function
traversal, but rather than focusing everywhere, it filters out
irrelevant functions (background noise).  For example, if you want
to debug a complex crypto routine that's attached to a graphical
display, you don't want to waste your time in the graphics.
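
A tiny, hypothetical C sketch of that filtering idea (not HBGary's
actual implementation, and the function names are made up): record
which functions ran during an idle baseline, record which ran during
the interesting operation, and report only the difference.

    #include <stdio.h>

    #define NFUNCS 8

    static const char *names[NFUNCS] = {
        "gui_redraw", "event_loop", "timer_tick", "log_write",
        "crypto_init", "key_schedule", "crypto_decrypt", "parse_blob"
    };

    int main(void)
    {
        /* 1 = function executed in that trace (toy data for illustration) */
        int baseline[NFUNCS]  = { 1, 1, 1, 1, 0, 0, 0, 0 };  /* idle GUI run      */
        int operation[NFUNCS] = { 1, 1, 1, 1, 1, 1, 1, 1 };  /* during decryption */

        puts("functions unique to the interesting operation:");
        for (int i = 0; i < NFUNCS; i++)
            if (operation[i] && !baseline[i])
                printf("  %s\n", names[i]);
        return 0;
    }
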

Most of Hoglund's recent talks have featured at least snippets of
HBGary Inspector for anyone interested.  Unfortunately, the software
itself has too many zeros in the price tag for most people to buy it.


_______________________________________________
Dailydave mailing list
Dailydave () lists immunitysec com
http://lists.immunitysec.com/mailman/listinfo/dailydave

