Secure Coding mailing list archives
Really dumb questions?
From: leichter_jerrold at emc.com (Leichter, Jerry)
Date: Thu, 30 Aug 2007 09:56:19 -0400 (EDT)
| Most recently, we have met with a variety of vendors including but not
| limited to: Coverity, Ounce Labs, Fortify, Klocwork, HP and so on. In
| the conversation they all used interesting phrases to describe how they
| classify their competitors' value propositions. At some level, this has
| managed to confuse me and I would love it if someone could provide a
| way to think about this in a more boolean way.
|
| - So when a vendor says that they are focused on quality and not
| security, and vice versa, what exactly does this mean? I don't have a
| great mental model of something that is a security concern that isn't a
| predictor of quality. Likewise, in terms of quality, other than
| producing metrics on things such as depth of inheritance, cyclomatic
| complexity, etc., wouldn't bad numbers here at least be a predictor of
| a bad design and therefore warrant deeper inspection from a security
| perspective?

What you're seeing here is an attempt to control the set of measurements
that will be applied to your product. If you say you have a product that
tests "security", people will ask whether, out of the box, you detect
some collection of known security bugs. That collection might be good,
or it might contain all kinds of random junk gathered from 20 years of
Internet reports. As a vendor, I quite reasonably don't want to be
measured against some unpredictable, uncontrollable set of "hard"
(easily quantifiable) standards.

Beyond that, saying you provide "security" gets you right into the
maelstrom of security standards, certification, government and
accounting requirements, etc. Should there be a security problem with
customer code that your product accepted, if you said you provided
security testing, expect to be pulled into any bad publicity, lawsuits,
etc.

It's so much safer to say you help ensure quality. That's virtually
impossible to quantify or test, and you clearly can only be one part of
a much larger process.
If things go wrong, it's very hard for the blame to fall on your product
(assuming it's any good at all). After all, if you pointed out 100
issues, the fixing of which clearly improved quality, well, someone else
must have screwed up in letting one other issue through that had a
quality impact - quality is a result of the whole process, not any one
element of it. "You can't test quality in." (This sounds like excuses,
but it happens also to be mainly true!)

| - Even if the rationale is more about people focus rather than tool
| capability, is there anything else that would prevent one tool from
| serving both purposes?

Well, I can speak to two products whose presentations I saw about a year
ago. Fortify concentrated on security. They made the point that they had
hundreds of pre-programmed tests specific to known security
vulnerabilities. The weaknesses in their more general analysis - e.g.,
they had almost no ability to do value flow analysis - they could
dismiss as not relevant to their specific mission of finding security
flaws. Coverity, on the other hand, did very deep general analysis, but
for the most part found security flaws as a side-effect of general
fault-finding mechanisms. When critics pointed out this limitation of
Fortify, its defenders - including some on this list - answered that the
greatest value of Fortify came from the extensive work that had gone
into analyzing actual security holes in real code and making sure they
would be caught.

In practice, while there was overlap between what the two products
found, neither subsumed the other. If you can afford it, I'd use both.
(In theory, you could produce a security-hole library for Coverity that
copied the one in Fortify. But no one is doing that.)

| - Is it reasonable to expect that all of the vendors in this space will
| have the ability to support COBOL, Ruby and Smalltalk sometime next
| year so that customers don't have to specifically request it?

Well, the effort depends on the nature of the product.
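Neither vendor's internals are public, but the distinction between the two approaches can be sketched in a toy form: a pattern-based check matches the names of known-dangerous constructs with no understanding of data flow, while a flow-based check propagates a "tainted" mark from an input source and flags only sinks that tainted data can actually reach. Everything below (the rules, the statement format, the function names) is an illustrative assumption, not how either product works.

```python
import re

# Pattern-style check: flag known-dangerous calls purely by name,
# with no understanding of how data flows to them.
DANGEROUS_CALLS = re.compile(r"\b(strcpy|gets|sprintf)\s*\(")

def pattern_scan(source: str) -> list:
    """Return line numbers that mention a known-dangerous call."""
    return [i for i, line in enumerate(source.splitlines(), 1)
            if DANGEROUS_CALLS.search(line)]

# Flow-style check: propagate a "tainted" mark from the input source
# through assignments, and flag only sinks a tainted value can reach.
def taint_scan(statements: list) -> list:
    """statements are ('assign', dst, src) or ('sink', var) tuples;
    'input' is the sole taint source in this toy model."""
    tainted = {"input"}
    findings = []
    for i, stmt in enumerate(statements, 1):
        if stmt[0] == "assign":
            _, dst, src = stmt
            if src in tainted:
                tainted.add(dst)
            else:
                tainted.discard(dst)   # overwritten with a clean value
        elif stmt[0] == "sink" and stmt[1] in tainted:
            findings.append(i)
    return findings

prog = [
    ("assign", "a", "input"),   # a is tainted
    ("assign", "b", "a"),       # taint flows to b
    ("assign", "a", "const"),   # a is clean again
    ("sink", "b"),              # flagged: tainted data reaches the sink
    ("sink", "a"),              # not flagged
]
```

The pattern scanner is trivial to extend with new rules (hence the value of a large, curated library of known holes), while the flow analysis finds problems no name-matching rule could, but only where its model of the program is faithful.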
Products that mainly scan for particular patterns are probably easier to
adapt to other languages than products that do deeper analysis. On the
other hand, you have to build up a library of patterns to scan for,
which to some degree will be language-specific. Products that depend on
deep analysis of data flow and such usually don't care much about
surface syntax - a new front end isn't a big deal - but will run into
fundamental problems if they rely on static properties of programs, like
static types. Languages like Smalltalk, and even more so Ruby, are
likely to be very problematic for this kind of analysis, since so much
is subject to change at run time.

| - Do the underlying compilers need to do something different since
| languages such as COBOL aren't object-oriented which would make
| analysis a bit different?

I doubt object orientation matters all that much. In general, anything
that provides (a) static information and (b) encapsulation makes
analysis easier. COBOL is old enough to have elements that amount to
self-modifying code, for which good analysis methods don't exist. But
few COBOL programs have used the more dangerous features in many years,
so in practice this may be a non-issue. On the other hand, languages
like Ruby have resurrected - in a much more controlled fashion - these
same mechanisms, so they would be difficult to work with.

I haven't really kept up with the most recent work, but I doubt the
techniques to do any deep analysis of even well-behaved Ruby programs -
or, hell, even Java programs that use introspection and dynamic loading
and such - exist yet. The fully-enforced dynamic type systems in such
languages avoid many significant classes of errors common in other kinds
of languages, so the very questions you need to ask are different, and
I'm not even sure we know what the right questions are!
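The run-time mutability problem can be made concrete with a toy example. The thread talks about Ruby and Smalltalk, but Python has the same open-class property, so a Python sketch (all names here are invented for illustration) shows why a static analyzer cannot safely bind a call site to a method body:

```python
class Greeter:
    def greet(self) -> str:
        return "hello"

def audit(obj) -> str:
    # A static analyzer reading only this file would bind obj.greet()
    # to Greeter.greet and conclude the call always returns "hello"...
    return obj.greet()

# ...but nothing prevents code elsewhere (a plugin, a config hook,
# code driven by user input) from rebinding the method at run time:
Greeter.greet = lambda self: "pwned"

g = Greeter()
print(audit(g))   # prints "pwned", not "hello"
```

Any conclusion the analyzer drew about `audit` before the rebinding is invalidated after it, which is why deep analysis of such languages has to reason about the program's whole dynamic behavior rather than its static text.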
-- Jerry