Dailydave mailing list archives

Re: Security people are leaches. [sic]


From: Shane Macaulay <shane () security-objectives com>
Date: Sat, 08 Aug 2009 18:15:09 -0700

One idea I had for handling this issue is to train a Bayesian
classifier on all of the identified bugs for a given OSS project (this
is the fix set).  You could certainly also train on a superset of
any/all bugs you can find, but training against the self-similar
coding style of a single codebase should increase the hit rate.  What
would be handy is to retroactively process the commit logs for every
line of code; you could then identify the crappy devs from the
not-so-crappy, or maybe the just-crappy-close-to-ship-commit devs (OSS
ship = when everyone else in my project says so :).  Granted, this
would not catch all security bugs, and it will not even catch all of
anything except all that it itself finds (hah), but you would be able
to identify historically dodgy claims.  You could go on to assign
security impact ratings to source commits and identify the relative
"secureness" of some set of code.
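Something like this, roughly (a minimal sketch, assuming scikit-learn's
MultinomialNB as the Bayesian classifier; the toy diffs are made-up
stand-ins for commits you would mine from a real project's history):

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

def changed_lines(diff):
    """Keep only the added/removed lines of a unified diff."""
    return "\n".join(l for l in diff.splitlines()
                     if l.startswith(("+", "-"))
                     and not l.startswith(("+++", "---")))

# Hypothetical training data: (unified diff, is_security_fix) pairs.
commits = [
    ("+if (len > sizeof(buf)) return -EINVAL;\n+memcpy(buf, src, len);", True),
    ("-strcpy(dst, s);\n+strlcpy(dst, s, sizeof(dst));", True),
    ("+printf(\"starting daemon\\n\");", False),
    ("+verbose = atoi(optarg);  /* new --verbose flag */", False),
]

# Treat the diff as opaque text: whitespace-separated tokens only.
vectorizer = CountVectorizer(token_pattern=r"\S+")
X = vectorizer.fit_transform(changed_lines(d) for d, _ in commits)
clf = MultinomialNB().fit(X, [label for _, label in commits])

# Score an unseen commit: how much does it resemble past security fixes?
new_diff = "+if (idx >= n) return -1;\n+tbl[idx] = val;"
print(clf.predict_proba(vectorizer.transform([changed_lines(new_diff)]))[0][1])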

You could surely find bugs using this approach, though you would need
to modify the strategy a bit.  If we are already training on all
"known" security fixes, these will likely be highly uniform in their
structure (by the way, I would treat the data as opaque, leveraging
only the lines added/removed and combinations thereof).  These are the
commits that will be under heavy public scrutiny, so the dev in
question will typically work hard to isolate the exact problem and fix
only that one issue (fearing a high regression risk, or being accused
of sitting on a security issue long enough to slip in one more
feature!).  To find future bugs, you're going to have to do some root
cause failure analysis, which is super easy in this case: implement a
history flow engine (the best looking :) http://benfry.com/revisionist/,
http://www.research.ibm.com/visual/projects/history_flow/,
http://sourceforge.net/projects/historyflow/).  You can then go from
the fix (which is present-day fact) back to the original developer
feature.  This feature set (heh) looks to the future for bugs in OSS
software, which is, at the end of the day, the more "fun" work for
security people, not arguing semantics with specification jocks (not
sec ppl).  Given that you can't have too much fun, the work part (or
maybe the part to charge $ for) is the fix rater.
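One rough way to do that fix-to-origin walk with plain git (a sketch
under my own assumptions; the repo path and commit hash are
placeholders, and error handling is omitted):

import re
import subprocess

def removed_lines(repo, fix_commit):
    """Yield (path, old_lineno) for each line the fix commit deletes."""
    diff = subprocess.check_output(
        ["git", "-C", repo, "show", "--format=", "--unified=0", fix_commit],
        text=True)
    path, old = None, 0
    for line in diff.splitlines():
        if line.startswith("--- a/"):
            path = line[6:]
        elif m := re.match(r"@@ -(\d+)", line):
            old = int(m.group(1))
        elif line.startswith("-") and not line.startswith("---"):
            yield path, old
            old += 1
        # --unified=0 means there are no context lines to track

def blame_origin(repo, fix_commit, path, lineno):
    """Return the commit that last touched path:lineno before the fix."""
    out = subprocess.check_output(
        ["git", "-C", repo, "blame", "-l", "-L", f"{lineno},{lineno}",
         f"{fix_commit}^", "--", path], text=True)
    return out.split()[0]  # first field of git blame output is the hash

# Walk from a known security fix back to the commits it corrects:
# for path, n in removed_lines("/path/to/repo", "abc123"):
#     print(path, n, "from", blame_origin("/path/to/repo", "abc123", path, n))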

You can then track the relative need or requirement for businesses to
fix a bug in any given OSS codebase.  Surely the other metrics are fun
and interesting too, like who copies whom and who the top worst/best
developers are (real world, not some contrived coding competition); no
doubt that would make some great marketing.  Give out all your data 3-6
months retroactively so nobody calls you a liar, and sell the current data.

Seems like some COTS mashup would go far here toward getting results.
I bing'd, citeseer'd, and google'd around for a minute and did not
really find much of an attempt to do this in any serious way; frankly,
I'm shocked.

I could go on, but really, security is not something for developers to
play with, just like features are not something that security people
should be playing with.  Stay in your own sandbox and we can all just
get along.  You can do one or the other, not both; any attempt to do
both is an inherent conflict of interest, which means that neither
interest is being served (i.e., the bias will undo and waste any such
effort).

Stopping myself from going on now.
Shane


dave wrote:
Normally I would, of course, kill this thread, but there's lots of the
Linux Kernel/Vendor security community subscribed to the list, and I
think it's important for them to hear the story. Right now, Linux kernel
security is 5 years behind Windows. There's just no leadership on the
issue - and it doesn't have to come from Linus Torvalds or the
development leadership.

Partially, Linus is right - there really is no way to have developers
truly know the security ramifications of every change they commit or
every bug they fix. But on the other hand, the GRSecurity team and
others have shown that with very little additional investment in one
small team of good people (throw a half million USD a year at it and be
amazed at the results!), the Linux community could benefit vastly.
Modern software development DOES have to incorporate a security model,
and Linux is no exception if it wants to be successful.

It's always hard for security vendors to learn the lesson from Andrew
Cushman about how to handle security researchers. Quite literally, no
matter how much security researchers piss you off, you have to embrace
and extend their efforts and their community. It's the only way. Every
other way, from Denial, to Legal Threats, to Massive PR Effort, just
results in continued failure. If a Linux kernel developer suspects their
patch has security relevance, and deliberately hides that in their
commit message, they are in the Denial phase. The fact that people can
be mean when they point that out doesn't change the real failure.

In this case, the best move for Linux as a whole is to develop a
security center of excellence, possibly hosted somewhere where multiple
vendors can contribute to it, and work together to help with Linux's
(kernel) security problems. They can start by going through new kernels
and pointing out which changes may be security relevant, while training
up key Linux developers on modern security techniques.

Otherwise it's just not a fair fight. I do so love a fair fight. :>

-dave

RB wrote:
On Fri, Aug 7, 2009 at 11:41, Aaron<apconole () yahoo com> wrote:
The 'shades of grey' only exist to security people. To no one else is
it important that a bug disclose information, allow invalid root
access, or escalate privileges.
Rather, 'shades of grey' only exist to critical thinkers who actually
understand the problems.  If you really think privilege escalation and
information disclosure are esoteric problems that should be relegated
only to "security people", I know a few thousand non-security system
administrators that would like you to stop whatever you're doing and
go flip burgers.  Pretending that there is no such thing as a security
bug is a childish pretense and is the equivalent of closing your eyes
and assuming nobody's there because you can't see them.

So the point still stands, why burden the average kernel
developer/debugger to do security research work for the security
researcher?
Because, although rather vocal, researchers compose a numerically
insignificant subset of the security "industry".  The vast majority
are sysadmins, engineers, and programmers that need to prioritize
fixes based not only on functionality but on exposure as well.  The
expectation is not for kernel developers to perform ad nauseam
security analysis of bugs, but for them to exercise due diligence and
not suppress security information.

_______________________________________________
Dailydave mailing list
Dailydave () lists immunitysec com
http://lists.immunitysec.com/mailman/listinfo/dailydave

