Security Incidents mailing list archives

RE: Incident investigation methodologies


From: "James C Slora Jr" <Jim.Slora () phra com>
Date: Fri, 4 Jun 2004 20:06:12 -0400

Harlan Carvey wrote on Wednesday, June 02, 2004 14:39:

> As mentioned earlier, it's very easy to sit back and say "a 'hacker'
> could...".  Yes, that's true...but what that ultimately leads to is
> complete paralysis.  At a certain point, there are so many things that
> an attacker *could* do that there's no sense in doing any sort of
> forensics on the system, live or otherwise.

It shouldn't lead to paralysis. Sorting out what could have happened from
what probably did happen is the whole point of forensics, isn't it?
Possibilities should be the stepping-off point for additional testing if
you want to be thorough. Example: online testing might be suspect, so boot
from known good media and run another check after gathering information
from the live system. If you think a trojaned BIOS is a possibility,
connect the disk to another system and check it out in addition to the
other tests. Compare results from multiple methods.
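
One cheap way to do that last comparison is to hash everything you
collected both ways and diff the two listings. A rough sketch in Python
(assuming you have already saved two plain "hash  path" listings, one
gathered on the live system and one gathered after booting from known
good media; the file names here are made up):

  #!/usr/bin/env python
  # compare_hashes.py - diff two "hash  path" listings, for example one
  # gathered on the live system and one gathered after booting from
  # known good media.
  # Usage: python compare_hashes.py live.txt offline.txt
  import sys

  def load(listing):
      entries = {}
      for line in open(listing):
          line = line.strip()
          if not line:
              continue
          digest, path = line.split(None, 1)   # "<hash>  <path>"
          entries[path] = digest
      return entries

  live = load(sys.argv[1])
  offline = load(sys.argv[2])

  for path in sorted(set(live) | set(offline)):
      if path not in offline:
          print("ONLY IN LIVE LISTING:    " + path)
      elif path not in live:
          print("ONLY IN OFFLINE LISTING: " + path)
      elif live[path] != offline[path]:
          print("HASH MISMATCH:           " + path)

Anything that shows up in only one listing, or whose hashes disagree,
goes on the list for a closer look.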

Rather than stopping discussion of possibilities, wouldn't it be more useful
to challenge people to come up with a good way to test for them?

There's not enough time in the world to investigate every possibility,
even if you could imagine them all, so there has to be a reasonable cutoff
point to the investigation. Plus, any need for evidence preservation
automatically excludes some types of investigation. The cutoff point
depends on the goals of the investigation, not on how sure you are in your
own mind.

But the end result should include the caveat that other possibilities
exist. I'd expect it to include a brief list of the possibilities that
were considered but discounted, either for lack of time or because they
were not likely enough to be worth investigating.

"I don't look for it because I've never seen it" doesn't sound like a
philosophy that guides anything useful. I remember hearing that in 1993
about computer viruses.
 
> Folks within the security profession, and even those on the fringes,
> are doing themselves a huge disservice.  Posting to a public
> list/discussion that something *could* happen serves no purpose, and
> greatly reduces the signal to noise level.

Should we instead limit ourselves to discussing the documented functions
of known in-the-wild tools and intrusion methods that have been analyzed
by an appropriate expert, or through best-practice methods on systems with
full and appropriate logs? If those are the only possibilities worth
considering, we should be able to copy the logs, run our favorite
anti-virus and vulnerability scanners, and be done with the investigation.

> Instead, what I'm suggesting is that we, as a professional community,
> look to repeatable experiments in those cases where we do not have
> actual data.  By that, I mean we set up and document our experiments to
> a level that someone else can verify them...run them on the same (or
> similar) set up and get the same (or similar) results.

This sounds great, to whatever extent people can do it. It should work
well for publicly available malware, and will turn up more facts, details,
and methods that will help us against at least 99% of the malware we
encounter. Probably the same 99% that existing sites already cover, but we
might get some better details.
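
Even the boring part (writing down the setup) is easy to script so it
actually gets done on every run. Something along these lines would do;
this is only a sketch of my own, and the output file name is made up:

  #!/usr/bin/env python
  # record_setup.py - write down the test environment and the exact
  # sample under test so someone else can reproduce the run.
  # Usage: python record_setup.py <path-to-sample>
  import hashlib, platform, sys, time

  sample = sys.argv[1]
  digest = hashlib.md5(open(sample, "rb").read()).hexdigest()

  out = open("test-setup.txt", "w")
  out.write("date:    %s\n" % time.asctime())
  out.write("host os: %s %s (%s)\n" % (platform.system(),
                                       platform.release(),
                                       platform.version()))
  out.write("machine: %s\n" % platform.machine())
  out.write("python:  %s\n" % platform.python_version())
  out.write("sample:  %s\n" % sample)
  out.write("md5:     %s\n" % digest)
  out.close()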

Not everyone will be willing to publicly share the intimate details of a set
of new malware that they find within their organization, though. They would
be doing the community a service, but they would be sabotaging their own
organization by revealing the holes in their own investigation. That's part
of what makes public lists useful - you can get some answers or ideas
without revealing everything.

> How do we do this?  Here's an example...instead of saying "...a rootkit
> could...", get a system and a rootkit, and install it.  Find out what
> that particular rootkit does.  Document your testing setup (ie,
> WinXPSP1, etc), what you did to test (ie, ran FileMon and RegMon) and
> what you found.

This works for major distributions of a rootkit, but fully defining every
minor variation of every component of every rootkit is not going to happen.
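
For the major cases, though, the before-and-after comparison is simple
enough to script. Here is a rough stand-in of my own (not anything Harlan
described, and it assumes Python is on the test box) for part of what
FileMon gives you: snapshot the filesystem before installing the sample,
snapshot it again afterward, and diff the two. Registry changes would
need the same treatment through RegMon or a walk of the hives.

  #!/usr/bin/env python
  # fs_snapshot.py - record or diff size/mtime/md5 for every file under a
  # directory tree.  Take one snapshot before installing the sample and
  # one after, then diff them.
  #   python fs_snapshot.py snap <root-dir> before.txt
  #   (install the sample)
  #   python fs_snapshot.py snap <root-dir> after.txt
  #   python fs_snapshot.py diff before.txt after.txt
  import hashlib, os, sys

  def md5sum(path):
      try:
          return hashlib.md5(open(path, "rb").read()).hexdigest()
      except (IOError, OSError):
          return "unreadable"

  def snap(root, outfile):
      out = open(outfile, "w")
      for dirpath, dirnames, filenames in os.walk(root):
          for name in filenames:
              full = os.path.join(dirpath, name)
              try:
                  st = os.stat(full)
              except OSError:
                  continue
              out.write("%s|%d|%d|%s\n" % (full, st.st_size,
                                           int(st.st_mtime), md5sum(full)))
      out.close()

  def load(snapfile):
      entries = {}
      for line in open(snapfile):
          path, size, mtime, digest = line.rstrip("\n").rsplit("|", 3)
          entries[path] = (size, mtime, digest)
      return entries

  def diff(before, after):
      old, new = load(before), load(after)
      for path in sorted(set(old) | set(new)):
          if path not in old:
              print("ADDED:   " + path)
          elif path not in new:
              print("REMOVED: " + path)
          elif old[path] != new[path]:
              print("CHANGED: " + path)

  if sys.argv[1] == "snap":
      snap(sys.argv[2], sys.argv[3])
  else:
      diff(sys.argv[2], sys.argv[3])

It only catches changes that persist on disk, so it complements a
real-time monitor rather than replacing one, but it is repeatable and the
output is easy to attach to a writeup.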
 
> While it's entirely possible that a rootkit *could* do something, why
> not base what we do in fact, rather than in speculation, rumor, and
> paranoia?

Fact is very useful and we certainly could use more of it. 

Speculation is useful when you don't know what to investigate next. Healthy
discussion in the group should steer speculation toward investigation, and
usually does after a couple of false starts. But I don't think a healthy
list should shut down speculation or ridicule it.

Rumor, although frequently wrong, is often the best early warning because it
spreads quickly and is unhindered by plodding methodology. It does need to
be followed up by methodical investigation and proof as soon as possible.

Paranoia is unnecessary because they *are* out to get us.


