Secure Coding mailing list archives

IBM Acquires Ounce Labs, Inc.


From: arian.evans at anachronic.com (Arian J. Evans)
Date: Tue, 4 Aug 2009 22:23:28 -0700

Kevin -- excellent points. Starting on top:

+ this is happening... (really!)

+ "dynamic scanning" vendors are getting together to add/share more
data-points and lessons with:

++ WAF vendors
++ static-analysis automation vendors
++ consultants doing Pen-Testing, static analysis, threat modeling,
source reviews, etc.

It is all fresh and fairly immature, but I expect it to evolve
quickly. So don't give up hope yet :)

I do not see dynamic "scanning tools" vendors working together due to
market competition/differentiation (yet, at least), but I do see
dynamic scanning platform vendors (like my employer) reaching out to
the consulting community to figure out how to give them a better
platform from which to automate their bulk work (test every form
field for XSS, etc.) and add in custom testing/pattern matching. As
you are probably aware, even patterns in highly bespoke applications
can often be applied to others (in the same enterprise or globally).
In fact, the current generation of runtime CSRF tests I work with is
an evolution of extrapolating patterns from "bespoke" applications
and finding out how often they occur across unlike applications. (Often.)
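
To make the "bulk work" concrete, here is the kind of thing I mean,
boiled down to a toy: take every form field you know about on a page
and probe each one with a marker string, flagging fields that echo it
back unencoded. The URL, field names, and probe string below are made
up for illustration; a real platform does far more than this crude
reflected-XSS check.

# Toy reflected-XSS probe: one field at a time, so we know which field
# echoed. URL and field names are hypothetical; standard library only.
import urllib.parse
import urllib.request

PROBE = "xss<'\">probe"

def reflected_fields(url, fields):
    hits = []
    for name in fields:
        data = {f: "test" for f in fields}   # benign filler for other fields
        data[name] = PROBE                   # probe goes in this field only
        body = urllib.parse.urlencode(data).encode()
        with urllib.request.urlopen(url, body) as resp:   # POST
            if PROBE in resp.read().decode("utf-8", "replace"):
                hits.append(name)            # marker came back unencoded
    return hits

print(reflected_fields("http://test.example/search", ["q", "lang"]))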

If you have more specific examples/needs - feel free to contact me
directly, Kevin, to discuss further.


On Tue, Aug 4, 2009 at 8:35 PM, Wall, Kevin <Kevin.Wall at qwest.com> wrote:

> It's a pity that these dynamic-scanning vendors can't work together to
> come up with a common approach to at least help this automation
> you speak of part way along. (Yes, I know. I'm dreaming. ;-)

You are spot on. And all these are great ideas, but the implementation
is where it gets tricky...


> Some ideas that I've had in the past are that they could request and make
> use of:
> 1) HTTP access logs from Apache and/or the web / application server.
>    These might be especially useful when the logs are specially configured
>    to also collect POST parameters and then the application's regression
>    tests are run against the application to collect the log data. Most web /
>    app servers support Apache HTTPD-style access log format, so parsing
>    shouldn't be too terribly difficult in terms of the # of variations they
>    need to support.

This is a great idea, and one we have juggled around internally quite
often in terms of how best to implement it. At one point we explored
server-side agents to actively collect and report, but in my
experience very few (<1%) of users can deploy agents like this on
their production systems. And if they do, the agents are the first
thing blamed for any issues and get removed (and after being proven
"innocent" are still hard to re-add).

I am thinking the better (though less effective) implementation is
either (a) a user-driven upload feature for such files, or (b) a
client-side parsing script you can run on a dedicated machine you
control, and point at these logs/config files to parse and upload the
results to your dynamic testing vendor.
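
As a strawman for option (b): a toy parser that chews through an
Apache "combined"-format access log and emits the unique entry points
(method, path, parameter names) a scanner could seed its crawl with.
The log filename and output format are made up for the example; they
are not any vendor's actual interface.

# Sketch: harvest unique entry points (method, path, param names) from
# an Apache "combined"-style access log. Filename is illustrative.
import re
from urllib.parse import urlsplit, parse_qs

LINE_RE = re.compile(r'^\S+ \S+ \S+ \[[^\]]+\] "(\S+) (\S+) [^"]*"')

def entry_points(log_path):
    seen = {}
    with open(log_path) as fh:
        for line in fh:
            m = LINE_RE.match(line)
            if not m:
                continue                     # skip malformed lines
            method, target = m.group(1), m.group(2)
            parts = urlsplit(target)
            key = (method, parts.path)
            # record the parameter names seen for this entry point
            seen.setdefault(key, set()).update(parse_qs(parts.query))
    return seen

for (method, path), params in sorted(entry_points("access.log").items()):
    print(method, path, ",".join(sorted(params)))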

I have been looking for a "configuration-management" vendor that
provides this sort of "config-file management" that is common in the
enterprise. After talking to many customers, as recently as BH Vegas
this year, I cannot find any such vendor. Does one exist? (I have seen
a few tools that do this over the years, but it seems like no one uses
them). A vendor-supported config-management tool would be a great (and
easy) hookpoint. Kind of like DNS server records for network-VA/PT
testing, but on an "application entrypoint" layer.

I would definitely like to hear more of your thoughts here. (on or
offline) Unfortunately -- very few customers I work with ask for this
type of thing. While I would love to provide it -- most are still
asking for features to find/classify all of their enterprise
application assets. /a_priori_but_related_problem


> 2) For Java, the web.xml could be used to gather data that might allow some
>    automation, especially wrt discovery of dynamic URLs that are otherwise
>    difficult to discover by autoscanning.

Exactly. Also useful for identifying package mismanagement,
accidentally deployed modules, and "backdoors".
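
For the curious, mining web.xml for URL patterns is about a dozen
lines. A rough sketch (the namespace-stripping is just a convenience
so the same code works across J2EE/Java EE schema versions; the file
path is illustrative):

# Sketch: list servlet URL patterns from a standard web.xml deployment
# descriptor -- entry points a crawler may never find on its own.
import xml.etree.ElementTree as ET

def local(tag):
    # "{http://java.sun.com/xml/ns/javaee}url-pattern" -> "url-pattern"
    return tag.rsplit('}', 1)[-1]

def url_patterns(web_xml_path):
    mappings = []
    for elem in ET.parse(web_xml_path).iter():
        if local(elem.tag) == "servlet-mapping":
            servlet = pattern = None
            for child in elem:
                if local(child.tag) == "servlet-name":
                    servlet = (child.text or "").strip()
                elif local(child.tag) == "url-pattern":
                    pattern = (child.text or "").strip()
            mappings.append((pattern, servlet))
    return mappings

for pattern, servlet in url_patterns("web.xml"):
    print(pattern, "->", servlet)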


> 3) If Struts or Struts2 is being used, gather info from the Struts validators
>    (I forget OTTOMH what the XML files are called where this is placed, but
>    those are what I'm referring to.)

Same goes for most modern frameworks. Too bad we do not have a
standard 'web.config' file-format for frameworks.
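
(For Struts 1 the file you are reaching for is validation.xml. Pulling
field names and validator types out of it gives you both the parameter
names the app expects and hints about what malformed input is worth
trying. A quick hypothetical sketch:)

# Sketch: pull field names and validator types out of a Struts 1
# validation.xml -- e.g. a field that "depends" on creditCard tells a
# scanner both the parameter name and what bad input to try.
import xml.etree.ElementTree as ET

def struts_rules(validation_xml):
    for form in ET.parse(validation_xml).iter("form"):
        for field in form.iter("field"):
            yield form.get("name"), field.get("property"), field.get("depends")

for form, prop, depends in struts_rules("validation.xml"):
    print("%s.%s depends on: %s" % (form, prop, depends))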


> 4) Define some new custom format to allow the information they need to be
>    independently gathered. At minimum this would be some file format
>    (maybe define a DTD or XSD for some XML format), but their tools could
>    offer some GUI interface as well.

See above. I have also thought about a user-extensible script that
folks could tweak to parse multiple types of config files across
multiple frameworks/platforms, and normalize the results into one big
"config.xml" to feed into their testing framework. Thoughts?


> Of course, I'm not sure I'd expect to see anything like this in my lifetime.

Don't be too cynical. There are those of us who see the need, and
want to build these tools for you. :)

Dynamic testing companies can also work with companies like Veracode
to capture/enable extra data-points for testing. And vice-versa. I
have in the past found vulnerabilities in applications after they were
rigorously "source-code scanned" and "fixed", simply because the
source code scanning tool was not properly including linked libraries
(where the exploitable weaknesses lived). A pity we could not feed
those back to the source-scanning tool vendor.


> At this point, most of the users of these tools don't even see this as a
> need to the same degree that Arian and readers of SC-L do and it's not
> clear how

Eh, that is the problem. But I think it is changing. Last year I had
maybe one or two discussions/requests from folks to help them solve
this type of problem. This year I have had more than that in just the
first six months. The maturity of solution-seekers is growing.
++optimism.


> copying ideas from one another. The other significant driver AGAINST this
> as I see it is that many vendors sell "professional services" for
> specialized consulting on how to do these things manually. That brings in
> extra $$

Working for a software company -- I can say firsthand we want to
build *tools* to help end-users, consultants, robots, whomever solve
this problem, without fielding an army of consultants. I think most
software companies would prefer to do this. Pro-services at software
companies often exist because of the need for revenue early on, or
weaknesses in the product, or simply demand from customers. I know I
prefer to automate things and make them easy for people, rather than
fly around a lot on airplanes. :)


> into their companies so convincing them to give up their cash cow is
> a hard sell. And as a purchaser of one of these tools, if you don't have
> the needed expertise in house (many do, but I'm guessing a lot more
> don't), it's hard to tell your director that you can't use that $75K piece
> of shelfware that your security group just bought because they can't
> figure out how to configure it. Instead, they are more likely to quietly
> just drop another $10K or so for consulting discreetly and hope their
> director or VP doesn't notice.

Or $100k+ when you start talking about appsec tools, but point taken.

Again -- I think this solves itself somewhat as you (the paying
consumer) demand more and vote with your dollars, as we see more
cross-analysis and cross-functional integration between tools vendors,
and as you get more extensibility in your testing tools, allowing you
to leverage automation more smartly.

My $0.02 FWIW,

-- 
Arian Evans

"It is incumbent on every generation to pay its
own debts as it goes. A principle which if acted
on would save one-half the wars of the world."
-- Thomas Jefferson


