Security Incidents mailing list archives

RE: Incident investigation methodologies


From: Harlan Carvey <keydet89 () yahoo com>
Date: Mon, 7 Jun 2004 08:23:29 -0700 (PDT)


It shouldn't lead to paralysis. 

No, it shouldn't.  However, what sort of reaction
would one expect from the uninitiated who monitor this
list?  One person posts and says that they checked the
output of netstat on an XP box, another says "a
hacker/rootkit could have..." (note: without ever
taking things like WFP, necessary privileges, etc.,
into account).  Folks lurking on the list and just
monitoring things have to be thinking..."if the
hacker/rootkit/etc. can do all of these things, then
what's the point of doing anything?"

Sorting out what could have happened from
what probably did happen is the whole point of
forensics, isn't it?

And that's my point, as well...but I'd take it one step
further by providing a hands-on methodology (for review)
so that sysadmins can determine (a) if anything happened,
and if so, (b) what happened.

Possibilities should be the stepping off point for
additional testing if you want to be thorough. Example:
online testing might be suspect, so boot from known good
media and do another check after gathering the information
from the live system. If you think BIOS trojaning is a
possibility, connect the disk to another system to check it
out in addition to the other tests. Compare results from
multiple methods.

This is right along with what I'm recommending.  If
you have a suspicion of something, test to verify it. 
Speculation gets you nowhere.  With the necessary data
to justify additional effort, the proper testing
procedure/methodology can be very beneficial.
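
To illustrate the kind of cross-check I mean, here's a rough
Python sketch that compares the TCP ports a live box's netstat
claims are LISTENING against what an external nmap scan of the
same box actually sees.  The file names, and the assumption
that you saved grepable (-oG) nmap output, are mine...adjust
as needed:

import re

def ports_from_netstat(path):
    # Windows "netstat -an" lines look like:
    #   TCP    0.0.0.0:135    0.0.0.0:0    LISTENING
    ports = set()
    for line in open(path):
        parts = line.split()
        if len(parts) >= 4 and parts[0] == 'TCP' and parts[3] == 'LISTENING':
            ports.add(int(parts[1].rsplit(':', 1)[1]))
    return ports

def ports_from_nmap(path):
    # assumes grepable output, e.g. "nmap -p- -oG external_scan.gnmap <host>"
    ports = set()
    for line in open(path):
        ports.update(int(p) for p in re.findall(r'(\d+)/open/tcp', line))
    return ports

local  = ports_from_netstat('netstat_live.txt')
remote = ports_from_nmap('external_scan.gnmap')

print('Open from outside, missing from local netstat:', sorted(remote - local))
print('Listening locally, not seen from outside:     ', sorted(local - remote))

A port that's open from the outside but doesn't show up in
the local output is exactly the kind of data point that
justifies the deeper, offline testing described above.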
 
Rather than stopping discussion of possibilities, wouldn't
it be more useful to challenge people to come up with a
good way to test for them?

This is what I'm trying to do.  I'm not saying that we
should stop considering possibilities.  What I
would like to see, however, is a more
academic/thorough approach, rather than simply saying,
"...a hacker/rootkit could have...", and leaving it at
that.  For example, under what conditions could the
hacker/rootkit do those things?  User mode rootkits on
Windows require the ability to debug programs...so
ensuring that users do not have this privilege is one
way of preventing the installation of such things.
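
As a rough sketch of how one might check that (it assumes a
Windows box, an administrative command prompt, and that
secedit.exe is available, as it is on the 2000/XP systems I've
worked with), you can export the local user rights assignments
and see which SIDs actually hold SeDebugPrivilege:

# Rough sketch: dump the local user-rights assignments with secedit
# and pull out the "Debug programs" right.  Requires admin; the
# UTF-16 encoding of the exported .inf file is an assumption.
import codecs, os, subprocess, tempfile

inf = os.path.join(tempfile.gettempdir(), 'user_rights.inf')
subprocess.run(['secedit', '/export', '/cfg', inf, '/areas', 'USER_RIGHTS'],
               check=True)

for line in codecs.open(inf, encoding='utf-16'):
    if line.strip().startswith('SeDebugPrivilege'):
        # e.g.  SeDebugPrivilege = *S-1-5-32-544
        print(line.strip())

By default only the Administrators group (S-1-5-32-544) holds
that right...anything beyond that is worth a closer look.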
 
Plus any need for preservation automatically excludes some
types of investigation.

You're right.  However, in talking to several folks in the
security profession who do incident response, I've found
that many cases are non-litigious in nature.

But the end result should include the caveat that
other possibilities exist.

Sure.  However, information collected from the system
would be suitable for ruling possibilities out, or
greatly reducing their likelihood.
 
"I don't look for it because I've never seen it"
doesn't sound like a
philosophy that guides anything useful. 

Of course not.  But ruling something out, or reducing
its likelihood based on data/facts collected from the
system *is* useful.

Folks within the security profession, and even those on the
fringes, are doing themselves a huge disservice.  Posting to
a public list/discussion that something *could* happen
serves no purpose, and greatly reduces the signal-to-noise
ratio.

Should we instead limit ourselves to discussing documented
functions of known wild tools and intrusion methods that
have been analyzed by an appropriate expert or through
best-practice methods on systems with full appropriate
logs?

No, we shouldn't...I haven't suggested such a thing. 
What I am suggesting is that those of us who do this
sort of thing, or are interested in doing this sort of
thing, take a more academic approach to how we do it.


Instead, what I'm suggesting is that we, as a professional
community, look to repeatable experiments in those cases
where we do not have actual data.  By that, I mean we set
up and document our experiments to a level that someone
else can verify them...run them on the same (or similar)
set up and get the same (or similar) results.

This sounds great to whatever extent people can do it. It
should work well for publicly available malware, and will
help us all find out more facts and details and methods
that will help us against at least 99% of the malware we
encounter. Probably the same 99% that existing sites
already cover, but we might get some better details.

I'm not sure where you're getting your "99%"
information.  If this sort of thing is already
publicly available, then why are we in the current
state we're in?  Rather than saying something like
"...the same 99% that existing sites already
cover...", why not provide a link or two to such
sites?  Since you're already aware of them, can you
provide a link or two to any site that has performed a
malware (specifically, rootkit) analysis to the degree
that I'm describing?  I'll narrow it down to just
Windows systems.  I know that rootkit.com does not
provide this level of information...if it does, please
provide a direct link, and I shall stand corrected.  
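
To make the repeatable-experiment point a bit more concrete,
here's a rough Python sketch of the minimum level of
documentation I have in mind...record the platform and the
exact tools (and their hashes) used, so that someone else can
set up the same test and compare results.  The tool paths
below are just placeholders:

# Rough sketch: capture enough about the test environment that
# someone else can reproduce it.  Tool paths are placeholders.
import hashlib, platform, time

tools = [r'C:\WINDOWS\system32\netstat.exe',   # placeholders: list whatever
         r'C:\tools\fport.exe']                # was actually run in the test

print('Date   :', time.strftime('%Y-%m-%d %H:%M:%S'))
print('System :', platform.platform())

for path in tools:
    try:
        digest = hashlib.md5(open(path, 'rb').read()).hexdigest()
    except OSError as err:
        digest = 'unreadable (%s)' % err
    print('%-40s MD5 %s' % (path, digest))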

Not everyone will be willing to publicly share the intimate
details of a set of new malware that they find within their
organization, though. They would be doing the community a
service, but they would be sabotaging their own organization
by revealing the holes in their own investigation. That's
part of what makes public lists useful - you can get some
answers or ideas without revealing everything.

No one is asking anyone to reveal everything.  If a
bit of malware is found within an infrastructure, it
can be provided to others to perform the analysis, if
the individual who found it is unwilling or unable to
perform the analysis themselves.

This works for major distributions of a rootkit, but fully
defining every minor variation of every component of every
rootkit is not going to happen.

Nor should we expect it to.  However, what we *can* do
is use information collected from the analysis of
current rootkits (user- and kernel-mode) to improve
not only our preventative/hardening measures, but also
our investigative methodologies.
   
Speculation is useful when you don't know what to
investigate next. Healthy discussion in the group should
steer speculation toward investigation, and usually does
after a couple of false starts. But I don't think a healthy
list should shut down speculation or ridicule it.

I don't agree with that.  Take recent posts for
example.  An original poster (OP) stating that they
didn't see anything suspicious in the output of
netstat.exe is a somewhat regular occurrence on the
list.  Someone responding with "a hacker/rootkit could
have trojaned/modified netstat.exe or the DLL(s) used
by netstat.exe" is speculation...things such as nmap
scans from the outside, the status of WFP, etc., are
not taken into account.  

Educated, knowledgeable suggestions with regard to
additional information to collect and analyze are much
preferable to speculation.  
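
For example, rather than speculating that netstat.exe was
trojaned, one quick data point on 2000/XP is to compare the
copy in system32 against the WFP-protected copy in
dllcache...ideally after booting from known-good media, since
a kernel-mode rootkit can lie to anything run on the live
system.  A rough Python sketch, with the paths as assumptions:

# Rough sketch: compare netstat.exe in system32 with the WFP copy
# in dllcache.  Run from known-good media where possible; a live,
# kernel-mode-compromised box can misrepresent both files.
import hashlib, os

root   = os.environ.get('SystemRoot', r'C:\WINDOWS')
live   = os.path.join(root, 'system32', 'netstat.exe')
cached = os.path.join(root, 'system32', 'dllcache', 'netstat.exe')

def md5sum(path):
    return hashlib.md5(open(path, 'rb').read()).hexdigest()

try:
    a, b = md5sum(live), md5sum(cached)
    print('system32 :', a)
    print('dllcache :', b)
    print('MATCH' if a == b else 'MISMATCH -- dig deeper')
except OSError as err:
    print('Could not read one of the copies:', err)

A mismatch doesn't prove a trojan, and a match doesn't rule
out a kernel-mode hook, but either way you've replaced
speculation with data collected from the system.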

Paranoia is unnecessary because they *are* out to
get us.

Perhaps.  However, computers are inherently
deterministic, relying on discrete states at their
very core.  I agree that system complexity perhaps
clouds some of this deterministic nature, but I also
think that information, facts, and data can and should
be used, rather than speculation and armchair quarterbacking.

