Dailydave mailing list archives

Re: Coverage and a recent paper by L. Suto


From: "matthew wollenweber" <mwollenweber () gmail com>
Date: Mon, 15 Oct 2007 16:24:54 -0400

Personally, I don't understand the current trend in fuzzer research toward
chasing full code coverage. Sure, it's nice to check everything and have a
fuzzer traverse every function in the code, but that may come at the cost
of doing it all poorly. If you have a fixed amount of time for the
assessment, I'd rather spend it where it's needed. As you said, it's
better to thoroughly test the spots in the code where the bugs are.

I do like instrumenting fuzzing runs and measuring where the fuzzer is being
effective and where it's spending its time. It's useful both to see where the
problems cluster and to give the thing a kick if it gets stuck.

While it's not for web apps, I find the work Greg Hoglund and his team at
HBGary have done to be a step in the right direction. His tool isn't
really meant for fuzzing (at least from my limited knowledge of it), but it
takes an RE approach to find what's important and focus there. To do that,
it measures function traversal, but rather than treating everything equally,
it filters out irrelevant functions (background noise). For example, if you
want to debug a complex crypto routine that's attached to a graphical
display, you don't want to waste your time in the graphics code.
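
A rough Python sketch of that filtering idea (my reading of it, not HBGary's
actual implementation; the function names are made up for illustration):
record which functions fire during background activity, then subtract them
from a trace taken while driving the feature you care about.

def filter_background(baseline_traces, target_trace):
    """baseline_traces: iterable of sets of function names seen while idle;
    target_trace: set of function names seen while exercising the feature."""
    noise = set().union(*baseline_traces)
    return target_trace - noise

# Anything the graphics/display code touches while idle drops out,
# leaving just the functions unique to the crypto path worth focusing on.
idle = [{"gfx_redraw", "gfx_blit", "timer_tick"},
        {"gfx_redraw", "timer_tick"}]
crypto_run = {"gfx_redraw", "timer_tick", "aes_expand_key", "aes_encrypt_block"}
print(filter_background(idle, crypto_run))  # {'aes_expand_key', 'aes_encrypt_block'}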

Most of Hoglund's recent talks have featured at least snippets of HBGary
Inspector, for anyone interested. Unfortunately, the software itself has too
many zeros in the price tag for most people to buy.


On 10/15/07, Dave Aitel <dave () immunityinc com> wrote:

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

http://ha.ckers.org/files/CoverageOfWebAppScanners.pdf

He compared NTOSpider/Appscan/Webinspect - and NTOSpider "won".

Without the full vulnerability reports and the VMs of the vulnerable
apps, I'm not going to dwell on the comparison of the tools beyond saying
it's interesting, but I will say that all this focus on "code
coverage" is a bit strange. Vulnerabilities, like fish, tend to
cluster in particular places. Having 10% code coverage is perfectly ok
if it's the code that has the bugs. And you can't see race conditions
with code coverage tools.

Also, most of the value of instrumentation is that, when built into
your attack tool, you get a real-time, human-usable view into the guts
of the application. This is why I don't think byte-code
instrumentation has huge advantages over just hooking Win32 APIs. But
I don't have a byte-code parser yet either. :>

Speaking of race conditions, I'm happy to announce that Immunity has
+= Paul Starzetz (http://marc.info/?a=107032640300001&r=1&w=2).

- -dave

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.6 (GNU/Linux)

iD4DBQFHE52HB8JNm+PA+iURAk9xAKCzXrmHP7GdURmWvQqDLQx9FOn8FgCYnfJI
m3XYC6cV71su3IJLIC+qZw==
=RQ5q
-----END PGP SIGNATURE-----

-- 
Matthew Wollenweber
mwollenweber () gmail com | mjw () cyberwart com
www.cyberwart.com
_______________________________________________
Dailydave mailing list
Dailydave () lists immunitysec com
http://lists.immunitysec.com/mailman/listinfo/dailydave
