Dailydave mailing list archives

Re: This morning's Security Wire Perspectives - Ira's proof of concept code article.


From: Julio Patel <smerdyakovv () gmail com>
Date: Mon, 29 Nov 2004 21:39:00 -0500

On Mon, 29 Nov 2004 15:41:19 -0800, robert () dyadsecurity com
<robert () dyadsecurity com> wrote:
From: Julio Patel <smerdyakovv () gmail com>
So are gun makers, and the Sharpie pen company, spray paint
manufacturers, baseball bat makers, etc. etc.  The tools have dual
purposes.  Security Researchers are not responsible for the criminal
actions of others.

Upper Bound.  Lower Bound.  Anyone wanna shoot for something in between?

Sure. How about disclosure being irrelevant in a non-uniform global
security model such as the internet (let's call it security model A)
connected to a private company (we'll call it security model B)? The
models differ in who owns the information and in what each model is
protecting: integrity or confidentiality. On one side, ownership of
information where security is (more) based upon disclosure of flaws in
software to protect the integrity of anonymous people/systems (call it
swarm intelligence if you want); on the other, hierarchical ownership of
information with segmentation, where disclosure would not improve the
security posture of that distributed entity (as only certain people own
Information Security) and would cause you to lose your job. I guess the
problem is where you try to pick one model of flaw disclosure to fit
both incompatible cases. I couldn't care less; I guess I side more with
information flow protecting me most of the time, but I don't think it
really matters overall in a global sense.

You must be the sales guy....all that fluff above just to say that,
yes, you do fall somewhere between the extremes.  Great.


"If I do all of the security Quality Assurance work for the vendor for
free, I should be thrilled that they get to dictate when that
information can be made public.  If I'm a good little security
researcher and work on their schedule, they might just be nice enough to
give me credit in their version of the advisory".

Easy there.  We know you're with Dave.

How was that assertion made? I think there is a bigger picture here. I
like picking both sides to argue with personally; I find it more
rewarding. That might indicate there is a larger underlying problem at
work here. I think (full/selective) disclosure advocates and
vulnerability researchers should be murdered. Just kidding: I think
exploits should be released first in the underground and traded into a
vast network of criminals, and the vendor should not be informed. Hrmm,
perhaps I'm not getting my point across correctly; something about
religious wars being based on differing perspectives seems non-fruitful,
and more emotional than rational.

you're still middle-of-the-road?  OK.



Are you talking for all the end users?  Oddly, I might want the patch
before the exploit code...it depends on the situation.

But what if I have the exploit code already? Who is the provider of the
security flaw information? Is the vendor the one providing the
security flaw information, or are they the ones who failed to deliver a
product that is secure in the first place, necessitating patch
management under fire? How about teams of people who find this
information and don't disclose it at all, but are capable of malice
against you?

Yepper.  That's why *I* said "It depends on the situation".  


Now that we understand that, does it still matter whether the vendor is
notified first or the exploit is available first? I seriously doubt that
most public disclosures of flaw information (perhaps even bad
configuration defaults, or hard-to-implement security controls) are done
without some attempt at working with the vendor. Some vendors take a
stance of apathy and some are really good about being responsible, and
even then it's not fully consistent. As I write this I understand why I
never talk about disclosure. For ME as a security-related person, I
would prefer to have the advisories include good details about the flaw,
so personally I can assess the threat to me, and from the other side of
the fence I can use this information to verify exploitability of my
resources. I can write exploit code, or test for flaws, or assess if I
need to take immediate action because I am impacted by a specific flaw.
If someone forgets to write an exploit I don't care, and I shouldn't
sleep any better if they don't because there are plenty of people who
make it an art form to create exploits based upon very little
information in a very short amount of time. We just need to wake up and
be realistic here. Who has the information, Who is in control of the
information, Who can create new information, and Who is impacted.
Certainly we can switch these around quite a bit and come to different
conclusions. If we withhold exploit code aren't we just being arrogant
to think that this is improving things? This is the first thing that
comes to my mind. Of course, as you can see, my opinion tends to sway
like a 100-foot rubber light pole in high winds.


A woman went to the doctor and told him that she had hurt herself
while golfing.  The doctor said, "where?" and she replied "between the
first and second hole"...The doctor told the woman...."your stance is
too wide"....haha, get it.  your stance is too wide.



HAHA.  Yeah, but you woulda been pissed if your sister broke the lock
and then told the other kids about it before giving you the chance to
put it in the garage with daddy's beemer.

I locked my bike, but it was stolen because I failed to understand that
my bike DMZ was smaller than I thought it really was. I guess it might
not have mattered if my lock sucked anyhow; that would have made me feel
better, I guess, but that's not important right now.

we figured out how to jiggle loose those male/female combination bike
locks.  for months we had the nicest bikes in the hood.  if you grew
up in ann arbor, i may have gotten yours.  I was partial to Schwinns
back in the day.



apparently, neither do you.  You do know that many scanners check for
package versions, registry keys, DLLs, etc. locally, right?  I'm not
saying that all scanners and all checks use local access.  I am saying
that many do include the ability to do the 'scanning' this way.  Ira
took an extreme, you've taken up the flag for the other extreme, and
the truth....well, it's out there somewhere.
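
To make that concrete, here's a rough sketch of the kind of local check
I mean. The package name and the "first fixed" version are invented for
illustration, not taken from any real advisory:

# A "local" scanner check, sketched in Python: ask the package manager
# for the installed version and compare it against the version where
# the (hypothetical) flaw was fixed.  No exploit code involved.
import subprocess

PKG = "openssh-server"      # hypothetical target package
FIXED_IN = (3, 9, 1)        # hypothetical first non-vulnerable version

def installed_version(pkg):
    """Return the upstream version of pkg as a tuple, or None if absent."""
    try:
        out = subprocess.run(
            ["dpkg-query", "-W", "-f=${Version}", pkg],
            capture_output=True, text=True, check=True,
        ).stdout
    except (OSError, subprocess.CalledProcessError):
        return None
    # crude parse: drop epoch and Debian revision, keep numeric parts only
    upstream = out.split(":")[-1].split("-")[0]
    return tuple(int(p) for p in upstream.split(".") if p.isdigit())

ver = installed_version(PKG)
if ver is None:
    print(f"{PKG}: not installed")
elif ver < FIXED_IN:
    print(f"{PKG} {'.'.join(map(str, ver))}: flagged vulnerable by version alone")
else:
    print(f"{PKG} {'.'.join(map(str, ver))}: at or above the fixed version")

Registry keys and DLL versions on Windows are the same game with a
different query.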

You mean you can read files locally? Wow, I didn't know that. So you're
saying if I had a shell on a box I could simply query a specific
software version directly? THROUGH THE PREFERRED METHOD OF INFORMATION
PRESENTATION? Perhaps Robert is talking about something else here. I'm
just going to guess he is; certainly he isn't saying what I think you
think he is saying. But it's fair to assert that there exist two
extremes, however misplaced the assertion is in the context of what you
are trying to say.

So, Ira said:
"Security professionals need to test for the presence of the
underlying vulnerability, but this can be done with a scanning tool
or examining the software version and settings -- it doesn't require
the exploit."

and Robert said:
"People who follow this advice clearly have no concept of what their
automated scanners are doing or even how they are developed."

So, Ira was right.  An automated scanner *can* often test for a
vulnerability via the network (without exploit code), and even more
often if the scanner is configured to do the checks locally.
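
And on the network side, the no-exploit check can be as simple as
grabbing a banner and comparing version numbers. The host, port, and
cutoff version below are placeholders, and obviously a banner can lie:

# A "via the network" check with no exploit: read the service banner
# and flag anything older than the (hypothetical) fixed version.
import re
import socket

HOST, PORT = "192.0.2.10", 22   # placeholder address (RFC 5737 test range)
FIXED_IN = (3, 9)               # hypothetical first fixed version

with socket.create_connection((HOST, PORT), timeout=5) as s:
    banner = s.recv(256).decode("ascii", errors="replace").strip()

# e.g. "SSH-2.0-OpenSSH_3.8.1p1"
m = re.search(r"OpenSSH_(\d+)\.(\d+)", banner)
if not m:
    print(f"could not parse banner: {banner!r}")
elif (int(m.group(1)), int(m.group(2))) < FIXED_IN:
    print(f"{banner}: version says vulnerable -- worth verifying by hand")
else:
    print(f"{banner}: at or above the fixed version")

Neither of these proves exploitability, which is where the false
positive/negative argument picks up below.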

hmmm.  I took Ira's statement to mean that if you're a pen-tester
worth half a chit, you'll be able to come up with a way to test for
the vuln on your own.  Note that I'm not saying that Ira is worth half
a chit...but, if you're contracted to scan a network and can't come up
with your own stuff, then your percentage of chit is on the decline.

OK so I'll take the role of translator here. Perhaps what he might be
saying is that writing exploits takes time? And that writing an exploit
for every potential vulnerability you find based upon partial flaw
information doesn't lend itself to the efficiency needed for being
thorough? It's not really a `kcid' swinging contest here I don't think.
It's not a question of whether we can; it's a question of getting the job
done, perhaps, and using verification methods that don't lead to false
positives and negatives and having access to information (perhaps
contained within the exploit?!?!?!?!?!?!) to create new ones for
variants of software encountered during a test (that might speed up
exploit development don't you think??). 

This is pretty much what Robert already said....he needs exploits (or
at least detailed tech info) to do better pen-tests.  OK,
Full-disclosure fits your business model...what's your point?  You've
already stated that your opinions are all over the board, there is no
single disclosure model, blah, blah, blah.  Are you now taking a
position?


I don't know if you know this,
but there is a natural tendency for people to specialize and work in
parallel to solve hard problems like exploit development, short time
frames, and thoroughness in testing; all of these things help to PROTECT
people.

Fer real?  dang, I hadn't figured that out yet.  I'm glad I found this
here mailing list...I'm learning new stuff every day.


one, you're contradicting yourself above.
two, this is about as silly as the 'encrypting email' post.  re-read
what you just posted.  are you saying that gateway devices should be
doing signature matching on known exploits?

I think that Robert is easy to misunderstand because of his cryptic
humor. If it makes him feel better, I chuckled when I read this.
Sometimes people make simple contradictions for humor; you'll find it's
a common tactic with certain personality types.

Ah, humour.  I'll have to remember that the next time I'm debating someone.  
Them: You've just contradicted yourself.
Me: ummm, I was just kidding?
Them: oh bollocks!  I thought I had you for a minute there.


I think the way to close this highly scattered email is to say that perhaps
we should design software with crumple zones. Perhaps we should be able
to enforce stronger, easier-to-manage security models within COTS
hardware/software. Perhaps we shouldn't count on discretionary, complex
models for managing information for novice users who have no idea what
that means, or for institutions where information ownership is
straightforward and has no place for individual discretion but rather
has information security policies that they currently have no hope of
enforcing. We can argue till we are blue in the face about what
sort of dance we need to do when something breaks, but that isn't
something that will prevent injury. This is not the aftermath body-count
industry (or is it?). People are going to make mistakes, people are
going to configure servers incorrectly, I will make mistakes inside my
code that lead to remote code execution, and I will badly configure a
server so that it accidentally allows someone to do something I
didn't intend, and in the end it won't really matter how anyone found
out. If you don't know how to protect yourself in light of this
inevitability, then you should demand that your vendor address it, or
that researchers do something to provide a solution (something that is
going on currently, but that not very many people seem to be aware of or
care about).

The ideas of how to do these sorts of things are pretty well established,
yet people don't use them. Why? It's too easy to ignore the real problem
when it seems there is no solution to it, and to concentrate on something
else, but it's been too long now. Information is going to play more of a
role in our lives in the future; we must move quickly and find a new
posture that we can manage, to reflect this new role of information. My
internet/company security problems are nothing like my bike security
problems; they are much worse.

yeah, don't go overboard there chief.  screw with the infosec
money-making machine and I might not be able to make my monthly
golden-calf payments.


jack


Julio Patel
_______________________________________________
Dailydave mailing list
Dailydave () lists immunitysec com
https://lists.immunitysec.com/mailman/listinfo/dailydave

