Dailydave mailing list archives

Re: Solutions


From: Andre Gironda <andreg () gmail com>
Date: Tue, 6 Jul 2010 16:32:59 -0500

On Thu, Jul 1, 2010 at 3:52 PM, Rich Mogull <rmogull-dd () securosis com> wrote:
So it's a combination of what you describe, but some other pieces:

Rich,

I think you may have interpreted Dave's email incorrectly. What did
Dave describe? Let's take a look:

The major problem with 90's era technology (i.e. scanners/sniffers!) is that they are
in a very high noise/low signal environment. This is as true for static code
analysers as it is for IDSs and Web Application Firewalls.

Immunity sees lots of success (and has for many years) with organizations that have
done high level instrumentations against their applications, and then used powerful
data mining tools to look at that data.

Dave says that WAFs and IDSs have at least one major problem that
PREVENTS them from working / being useful.

Dave also says that static code analyzers have at least one major
problem that keeps them out of the useful category. I am going to have
to agree with Dave on some of these brilliant points.

Dave says that "instrumentation" combined with "data mining tools"
works. He's talking about corpus distillation.

1. Browser session virtualization and instrumentation, with some defensive technologies. Kind of like Trusteer is doing.

Why bring this up when we know it's not going to work? If you want to
add more anti-phishing support to the browser, try:
http://stackoverflow.com/questions/3080896/what-is-the-best-way-to-stop-phishing-for-online-banking/3080963#3080963

Your use of the word "instrumentation" above is superfluous!
Instrumenting a browser means loading IE under PIN and then hitting the
browser with a bunch of HTTP responses. Then you hit it with the same
HTTP responses, noting the code coverage. The code coverage should be
consistent between regressions. Then you add a new test case. If the
code coverage goes up, that means you are hitting a new code path
(note: I don't want to argue about the different types of code
coverage right now, but it could be something other than a "path").
That new code path probably contains bugs, so you set out with new,
similar test cases in order to find them. This is similar to asserting
for similar bugs to fix during continuous prevention development.
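
To make that loop concrete, here is a minimal sketch. The
run_instrumented() callback is a stand-in for whatever Pin tool you
already use to dump the basic blocks that were hit; it is not a real
API:

# Sketch of the coverage-regression / corpus-distillation loop described
# above. run_instrumented(response) is assumed to replay one HTTP response
# against the browser under a Pin coverage tool and return the set of
# basic-block addresses that were hit -- that part is whatever Pin tool
# you already have, and is not sketched here.

def distill(candidate_responses, run_instrumented):
    corpus = []            # test cases worth keeping
    seen_blocks = set()    # union of coverage over the whole corpus
    for response in candidate_responses:
        hits = run_instrumented(response)
        # Replaying the same input twice should give the same coverage;
        # if it does not, the regression baseline is useless.
        if run_instrumented(response) != hits:
            continue
        new_blocks = hits - seen_blocks
        if new_blocks:
            # Coverage went up: this input reaches a new code path, so it
            # (and test cases similar to it) are worth keeping and fuzzing.
            corpus.append(response)
            seen_blocks |= hits
    return corpus, seen_blocks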

Also: I'm ashamed that you mentioned a "security product vendor" here.
Thanks for ruining my holiday weekend.

3. WAF for some basic blocking and outside the web server monitoring. If all connections are routed through the 
gateway first, the WAF would probably be integrated into the gateway. Different than using a WAF how they are 
generally deployed today- more a tool for instrumentation/monitoring.

Didn't Dave mention that WAFs do not work?

Again, here is another use of your superfluous "instrumentation"
catch-phrase. I think you are misusing the word?

Perhaps you are relying on only one portion of the dictionary
definition of "instrumentation (computer programming)":
http://en.wikipedia.org/wiki/Instrumentation_%28computer_programming%29

You seem to care most about traditional IT monitoring, which involves
Event Logs, or their equivalent.

Dave appears to be talking about "code instrumentation", referring to
hit-tracing specifically if I understand him correctly.

4. More instrumentation in the web app server, and web app anti-exploitation (mod, the RTE stuff Fortify is just 
starting to play with that no one is buying).

At least this time you identified not only the incorrect product name,
but also the fact that the product does not help us [because it's a
WAF], as evidenced by your own point that nobody is purchasing it.

The product you mention is Fortify RTA. Dave would be more interested
in hearing about Fortify PTA, which does something a little closer to
what he is describing.

5. Database activity monitoring deployed in inline mode with active blocking. Most effective is the Secerno-style 
whitelisting, but the Imperva/Guardium approaches also help.

I have no idea what DAM and these products have to do with code
instrumentation. Please stop with the incorrect comparisons.

6. Something to tie all this crap together.

Rich -- this is very cool! At least you understand that your #1-5 is a
bunch of crap! I'm still not sure why you brought it all up. We could
have just summarized with:

"Hi. This is Rich Mogull. DAM, WAF, and browser virtualization are
crap. Let's try assessing apps for vulnerabilities before they go into
production or customer release like the pro's do: by instrumenting our
apps with PIN and/or Fortify PTA and by modifying an existing test
harness to produce fuzz/robustness test cases."
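
And "modifying an existing test harness to produce fuzz/robustness
test cases" can be as simple as taking the HTTP responses the harness
already replays and corrupting them a little before they reach the
browser. A rough sketch (the function names are mine, not from any
particular tool):

import random

def mutate(data, n_flips=8):
    # Corrupt a few random bytes in an otherwise valid HTTP response.
    buf = bytearray(data)
    for _ in range(min(n_flips, len(buf))):
        i = random.randrange(len(buf))
        buf[i] = random.randrange(256)
    return bytes(buf)

def fuzz_cases(seed_responses, cases_per_seed=100):
    # Yield (seed, mutated) pairs to feed to the instrumented browser.
    for seed in seed_responses:
        for _ in range(cases_per_seed):
            yield seed, mutate(seed)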

Also see: The Art of Software Security Assessment; Chapter 4, "Auditing
Review Process"; Code-Auditing Strategies, Code Comprehension
Strategies, CC5: "Trace Black-Box Hits" (and other relevant portions
of that book; I'll be happy to point you right at them if you need
more info).

However, the authors of TAOSSA did not take CC5 to the next level,
which would be to perform data mining on the results over time and
across many apps that interact with the target app. In my example
above, the HTTP responses would come from web servers -- and those
test cases would be used against web browsers.

All these bits and pieces are out there, many deployed in limited ways, but they haven't been tied together well. 
Imperva and Secerno (acquired by Oracle, and via an F5 partnership) are heading down the WAF + DAM route to better 
knock out malicious SQL. Trusteer is doing the browser to gateway stuff, and there are a few custom app-level 
instrumentation projects like you describe.

Hey cool. Too bad these have nothing to do with fixing actual
problems, and instead create new ones.

Until WAF and DAM vendors bring their interfaces and insertion points
up to par by providing full robustness testing along with corpus
distillation, I'm pretty sure they are just another 0day waiting to
happen. Unless the 0day has already happened. Which it has. Just so
you know ;>

But pulling all this together is damn expensive and hard, and it's effectively science fiction for now. But I think 
we'll get there in 5 years or so. And in a lot of ways it's just really complex whitelisting, due to the signal to 
noise problems you point out. These baby steps are adding automated enforcement to the mining...

I can see where you are confused. Dave mentions things like AppSensor
and an "Application Security Operations Center". You immediately think
of the current paradigm of product-based WAFs and SOCs filled with
clueless newbies that get paid 10-20 US dollars per hour to monitor
them.

That model is never going to produce anything of value. At least not
until we can solve the core problems, which is a lot more than 5 years
away. Perhaps it's 20 years away.

___________________________________________________________________________________

So what you see is the start up of what I like to call the "Application SOC". It's
like a network SOC, but way more expensive, and with the chance of being actually
useful! :>

I'll go more into this whole thing when El Jefe goes into Beta, but for now, who has
gotten caught by something like this?

- -dave

Question here, because I'm totally missing the context: What is "El Jefe"?

There are some interesting tools that could fit the ASOC market.

The first was probably HP/SPIDynamics with their Assessment Management
Platform. Zynamics released a tool called BinCrowd. Metasploit
released Express and is now considering integrating The Dradis
Framework. HoneyApps is about to come out of beta with their Conduit
product. The Denim Group is working on a tool similar to Conduit,
called Vulnerability Manager. The great thing about many of these
products is that they have an open API and include out-of-the-box
support for the cheaper penetration-testing tools such as Burp Suite
Professional and IDA Pro.

I am a fan of these aggregation tools as well (note: these are the
types of tools that need to be combined, not that stupid
WAF+SSL-VPN+Trusteer crap you talked about before), because they often
provide the proverbial one-day answer to "Do we have data mining for
our risks-to-vulnerabilities, vulnerabilities-to-risks, and a timeline
of when, who, and how they were discovered?".
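
To be concrete, the data mining I have in mind is nothing fancier than
normalizing findings from all of these tools into one store and
querying it. A toy sketch (the record fields and function names are
illustrative, not any vendor's schema):

from collections import namedtuple

# One normalized record per finding, whatever tool it came from.
Finding = namedtuple("Finding",
                     "app cwe risk discovered_on discovered_by source")

def timeline(findings, app):
    # "When, who, and how": one app's findings in discovery order.
    return sorted((f for f in findings if f.app == app),
                  key=lambda f: f.discovered_on)

def by_risk(findings):
    # Risks-to-vulnerabilities rollup.
    rollup = {}
    for f in findings:
        rollup.setdefault(f.risk, []).append(f.cwe)
    return rollup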

Additionally, I want to mention that we need to treat vulnerabilities
as software weaknesses and understand the control that would remediate
a given software weakness versus the control currently in place. If
the existing control is to put a WAF in front of the HTTP(S) component
of the app, then we need to state clearly in the risk management that
the current control is not sufficient to properly protect the asset.
The WAF may be a requirement due to compliance or some other
obligation (such as bad decision-making processes at the management
level). However, the proper way to prevent [insert CWE here] is to
build a software-level control into the primary software architecture
(i.e. preferably the base class libraries, but otherwise a centralized
architecture component such as OWASP ESAPI). A map of "how to get to
the correct control(s)" needs to be built regardless of how many WAF
and WAF-like band-aids cover the HTTP(S) and non-HTTP(S) insertion
points into that app (or cover other classes of vulnerabilities that
do not rely on insertion points at all, such as missing or improper
data encryption).
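
For what it's worth, here is a toy example of what I mean by a
software-level control living in a centralized architecture component,
loosely in the spirit of OWASP ESAPI (the class and function names are
made up for illustration; this is not the ESAPI API):

import html
import re

class Encoder(object):
    # Single place where output-encoding decisions live, so CWE-79 is
    # handled at the output boundary instead of by a WAF guessing at
    # the input boundary.
    @staticmethod
    def for_html(value):
        return html.escape(value, quote=True)

class Validator(object):
    # Single place where input-validation rules live.
    USERNAME = re.compile(r"^[A-Za-z0-9_.-]{1,32}$")

    @classmethod
    def username(cls, value):
        if not cls.USERNAME.match(value):
            raise ValueError("invalid username")   # fail closed
        return value

# Application code calls the shared component instead of rolling its own:
def greet(raw_username):
    name = Validator.username(raw_username)
    return "<p>Hello, " + Encoder.for_html(name) + "</p>"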

Cheers,
Andre
_______________________________________________
Dailydave mailing list
Dailydave () lists immunitysec com
http://lists.immunitysec.com/mailman/listinfo/dailydave

