Secure Coding mailing list archives

Security in QA is more than exploits


From: Paco at cigital.com (Paco Hope)
Date: Wed, 4 Feb 2009 22:26:17 -0500

> For starters I believe you misinterpreted my comments on QA. I was in
> no way slamming their abilities. With this in mind, comments below.

Sorry about that. I am sensitive to the bias. I went to a very small company
once (10 people total) and as I looked around I saw offices with big LCDs (I
assumed management) and cubicles with multi-core, multi-monitor setups (I
assumed developers). Then I saw this old wooden table with three 5-year-old HP
desktops and a 15" tube monitor. I said to my host, the dev manager, "that's
the QA workstation." He looked surprised and said it was. He asked how I knew.
I said "because it's a piece of junk!" This plays out a lot in industry, both
big and little.

So I apologize if I misread and found bias where none was intended. I'm ever
vigilant against it.

>> Before anyone talks about vulnerabilities to test for, we have
>> to figure out what the business cares about and why. What could
>> go wrong? Who cares? What would the impact be? Answers to those
>> questions drive our testing strategy, and ultimately our test plans
>> and test cases.

> We absolutely agree here. At the same time an externally exploitable
> SQL injection needs to get fixed.

Let me shock and appall people by saying "not necessarily." It is commonly
believed that some bugs are so horrific that we can say, without considering
the business context, "they must be fixed." Ten years ago we said this about
buffer overflows: "If you find a buffer overflow, you *must* fix it
immediately." Then we went into industry and found out that there were times
when missing a market window was far more costly than releasing a known
buffer overflow. Ditto for the most horrendous web vulnerability you can think
of. I resist absolute statements like this.

As Andy Steingruebl pointed out "you also prioritize around effort to test and
avoid, right?" Of course. We all agree that the cost of the fix is weighed
against the benefits of not fixing and an estimate of the impact of successful
exploitation. And that's why it's always possible you'll find a bug that
sounds horrible but is released anyway.

Andy also said "I think we lose something when we start saying 'everything is
relative.'" I think we lose something more important if we try to impose
absolutes: we lose the connection to the business. No business operates on
absolutes and blind imperatives. Few, if any, profit-focused businesses
dogmatically fix all remotely exploitable SQL injections. Every business looks
pragmatically at these things. Fixing the bug might cause the release of the
product to slip by 6 weeks or a major customer to buy a competitor's product
this quarter instead of waiting for the release. It's always a judgment call
by the business. Even if their goal and their track record are fixing 100% of
sev 1 issues before release, you know that each sev 1 issue was considered in
terms of its cost, impact, schedule delay, and so on.

> In your experience do you find average QA people doing risk
> management?

Not all of them. Actually our experiences parallel nicely. My point is that
any weak QA practitioners we're seeing in the marketplace are not QA folks who
are short on security training. We're seeing QA folks who are short on QA
training. When I find QA folks who are up-to-date on the state-of-the-practice
in modern QA, teaching them a little security is a lot easier. When we go to
teach security to folks who are already behind in their basics, we're building
a castle on shaky ground.

> Actually the main goal of the article is that information security
> people need to set appropriate expectations as to what QA cares about
> as their primary business function. They need to factor in that the
> majority of QA people don't care about security as a primary job
> function, and that if infosec wants them to care they had better
> be prepared to speak their language and understand their needs.

So I'll continue to violently agree with you. :) QA is a process of taking
inputs in the form of requirements (use cases, stories, etc.) and producing
evidence of correct behavior (in both expected and unexpected situations). If
infosec wants to give QA something they can consume and use directly, security
requirements would be a great artifact. They fit the QA workflow, render
explicit the security expectations, and foster traceability and test case
development.
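
To make that concrete, here is a minimal sketch of a security requirement
rendered as a directly consumable QA artifact: a testable statement plus a
test case that traces back to it. Everything in it is hypothetical and
invented for illustration: the requirement ID and wording, the endpoint,
the QA hostname, and the response shape.

    # Hypothetical security requirement (ID, wording, and endpoint
    # are all invented for illustration):
    #   SR-42: "The search endpoint shall treat all user-supplied
    #   input as data, never as SQL."
    #
    # A test case tracing to SR-42. QA can schedule, prioritize, and
    # maintain it like any other requirement-driven test.
    import requests

    BASE_URL = "http://qa.example.com"  # hypothetical QA environment

    def test_sr42_search_treats_input_as_data():
        """Traces to SR-42; priority inherits from the requirement."""
        payload = "widgets' OR '1'='1"
        resp = requests.get(BASE_URL + "/search", params={"q": payload})
        # The expected behavior comes from the requirement, not from a
        # generic "try SQL injection" checklist: the input is treated
        # as data, so we expect an ordinary empty result set, not an
        # error page or a data dump.
        assert resp.status_code == 200
        assert resp.json()["results"] == []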

It is an outstanding idea for infosec guys to provide security test cases, or
the framework for them, to QA. That beats the heck out of what they usually
do. However, a bunch of test cases for XSS, CSRF, SQL injection and so on will
not map easily to requirements or to the QA workflow. At what priority do they
execute? When the business (inevitably) squeezes testing to try to claw back a
day or two on the slipped schedule, can any of these security tests be left
out? Why or why not? Without tying them into the QA workflow with clear
traceability, QA will struggle to prioritize them correctly and maintain them.
Security requirements would make that priority and maintenance
straightforward. At this point I'm not disagreeing with you, but taking your
good approach and extending it a step further.
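
One way to picture the difference traceability makes: when the schedule
gets squeezed, "can we cut this test?" becomes answerable from the
requirement's agreed priority. Here's a minimal sketch, with the
requirement IDs, severities, and test names all invented for
illustration:

    # Hypothetical traceability table: each security test maps to a
    # requirement whose business priority was agreed up front.
    TRACEABILITY = {
        "test_sr42_search_treats_input_as_data": ("SR-42", "sev1"),
        "test_sr43_session_cookie_is_httponly":  ("SR-43", "sev2"),
        "test_sr44_errors_hide_stack_traces":    ("SR-44", "sev3"),
    }

    def tests_to_run(cutoff="sev2"):
        """Keep every test at or above the cutoff severity. The
        cutoff is the business's call, not the tester's guess."""
        rank = {"sev1": 1, "sev2": 2, "sev3": 3}
        return [name for name, (_req, sev) in TRACEABILITY.items()
                if rank[sev] <= rank[cutoff]]

With a table like that, dropping the sev 3 test to claw back a day is a
defensible, auditable decision rather than a guess about which security
tests matter.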

Cheers,
Paco
--
Paco Hope, CISSP - CSSLP
Technical Manager, Cigital, Inc.
http://www.cigital.com/ - +1.703.585.7868
Software Confidence. Achieved.

